Test Report: Docker_macOS 15565

b70896c80ee4e66ab69b71a68ac4d59d2145555e:2023-01-08:27335

Failed tests (16/295)

TestIngressAddonLegacy/StartLegacyK8sCluster (258.44s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-123658 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0108 12:37:43.401167    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:39:59.548434    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:40:16.833659    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.839114    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.849503    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.871656    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.912564    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.993007    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:17.155071    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:17.475476    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:18.117288    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:19.397763    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:21.958226    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:27.079027    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:27.239677    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:40:37.319210    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:57.799599    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-123658 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m18.412502499s)

-- stdout --
	* [ingress-addon-legacy-123658] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-123658 in cluster ingress-addon-legacy-123658
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0108 12:36:58.222386    6872 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:36:58.222560    6872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:36:58.222566    6872 out.go:309] Setting ErrFile to fd 2...
	I0108 12:36:58.222570    6872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:36:58.222692    6872 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 12:36:58.223233    6872 out.go:303] Setting JSON to false
	I0108 12:36:58.241780    6872 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2191,"bootTime":1673208027,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 12:36:58.241879    6872 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:36:58.263595    6872 out.go:177] * [ingress-addon-legacy-123658] minikube v1.28.0 on Darwin 13.0.1
	I0108 12:36:58.305536    6872 notify.go:220] Checking for updates...
	I0108 12:36:58.327527    6872 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 12:36:58.348544    6872 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:36:58.370489    6872 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:36:58.391731    6872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:36:58.450400    6872 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 12:36:58.474012    6872 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 12:36:58.535470    6872 docker.go:137] docker version: linux-20.10.21
	I0108 12:36:58.535623    6872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:36:58.674862    6872 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-08 20:36:58.584048081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:36:58.718631    6872 out.go:177] * Using the docker driver based on user configuration
	I0108 12:36:58.740499    6872 start.go:294] selected driver: docker
	I0108 12:36:58.740529    6872 start.go:838] validating driver "docker" against <nil>
	I0108 12:36:58.740554    6872 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 12:36:58.744435    6872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:36:58.885073    6872 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-08 20:36:58.794100374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:36:58.885190    6872 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 12:36:58.885348    6872 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 12:36:58.907224    6872 out.go:177] * Using Docker Desktop driver with root privileges
	I0108 12:36:58.928947    6872 cni.go:95] Creating CNI manager for ""
	I0108 12:36:58.929010    6872 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 12:36:58.929025    6872 start_flags.go:317] config:
	{Name:ingress-addon-legacy-123658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:36:58.950874    6872 out.go:177] * Starting control plane node ingress-addon-legacy-123658 in cluster ingress-addon-legacy-123658
	I0108 12:36:58.972042    6872 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 12:36:58.994019    6872 out.go:177] * Pulling base image ...
	I0108 12:36:59.037011    6872 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0108 12:36:59.037056    6872 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 12:36:59.094014    6872 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 12:36:59.094040    6872 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 12:36:59.144172    6872 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0108 12:36:59.144211    6872 cache.go:57] Caching tarball of preloaded images
	I0108 12:36:59.144649    6872 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0108 12:36:59.188108    6872 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 12:36:59.209518    6872 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:36:59.438706    6872 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0108 12:37:07.137184    6872 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:37:07.137372    6872 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:37:07.751960    6872 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0108 12:37:07.752218    6872 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/config.json ...
	I0108 12:37:07.752250    6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/config.json: {Name:mk13145cfd20d96138dbac72623c70117000dca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:37:07.752634    6872 cache.go:193] Successfully downloaded all kic artifacts
	I0108 12:37:07.752662    6872 start.go:364] acquiring machines lock for ingress-addon-legacy-123658: {Name:mka9a351a5744740a5234f841f3cecbaf2564f33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 12:37:07.752838    6872 start.go:368] acquired machines lock for "ingress-addon-legacy-123658" in 169.088µs
	I0108 12:37:07.752864    6872 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-123658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123658 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 12:37:07.753007    6872 start.go:125] createHost starting for "" (driver="docker")
	I0108 12:37:07.805795    6872 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0108 12:37:07.806122    6872 start.go:159] libmachine.API.Create for "ingress-addon-legacy-123658" (driver="docker")
	I0108 12:37:07.806168    6872 client.go:168] LocalClient.Create starting
	I0108 12:37:07.806391    6872 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem
	I0108 12:37:07.806489    6872 main.go:134] libmachine: Decoding PEM data...
	I0108 12:37:07.806527    6872 main.go:134] libmachine: Parsing certificate...
	I0108 12:37:07.806615    6872 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem
	I0108 12:37:07.806681    6872 main.go:134] libmachine: Decoding PEM data...
	I0108 12:37:07.806698    6872 main.go:134] libmachine: Parsing certificate...
	I0108 12:37:07.807575    6872 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-123658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 12:37:07.865230    6872 cli_runner.go:211] docker network inspect ingress-addon-legacy-123658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 12:37:07.865344    6872 network_create.go:272] running [docker network inspect ingress-addon-legacy-123658] to gather additional debugging logs...
	I0108 12:37:07.865366    6872 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-123658
	W0108 12:37:07.919462    6872 cli_runner.go:211] docker network inspect ingress-addon-legacy-123658 returned with exit code 1
	I0108 12:37:07.919493    6872 network_create.go:275] error running [docker network inspect ingress-addon-legacy-123658]: docker network inspect ingress-addon-legacy-123658: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-123658
	I0108 12:37:07.919522    6872 network_create.go:277] output of [docker network inspect ingress-addon-legacy-123658]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-123658
	
	** /stderr **
	I0108 12:37:07.919635    6872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 12:37:07.974969    6872 network.go:306] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00011d8c8] misses:0}
	I0108 12:37:07.975007    6872 network.go:239] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 12:37:07.975023    6872 network_create.go:115] attempt to create docker network ingress-addon-legacy-123658 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 12:37:07.975123    6872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-123658 ingress-addon-legacy-123658
	I0108 12:37:08.068024    6872 network_create.go:99] docker network ingress-addon-legacy-123658 192.168.49.0/24 created
	I0108 12:37:08.068067    6872 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-123658" container
	I0108 12:37:08.068210    6872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 12:37:08.122354    6872 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-123658 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123658 --label created_by.minikube.sigs.k8s.io=true
	I0108 12:37:08.176217    6872 oci.go:103] Successfully created a docker volume ingress-addon-legacy-123658
	I0108 12:37:08.176358    6872 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-123658-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123658 --entrypoint /usr/bin/test -v ingress-addon-legacy-123658:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0108 12:37:08.618945    6872 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-123658
	I0108 12:37:08.618984    6872 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0108 12:37:08.619000    6872 kic.go:179] Starting extracting preloaded images to volume ...
	I0108 12:37:08.619129    6872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-123658:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 12:37:14.637774    6872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-123658:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (6.018612761s)
	I0108 12:37:14.637800    6872 kic.go:188] duration metric: took 6.018871 seconds to extract preloaded images to volume
	I0108 12:37:14.637938    6872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 12:37:14.781648    6872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-123658 --name ingress-addon-legacy-123658 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123658 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-123658 --network ingress-addon-legacy-123658 --ip 192.168.49.2 --volume ingress-addon-legacy-123658:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0108 12:37:15.129060    6872 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Running}}
	I0108 12:37:15.190224    6872 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Status}}
	I0108 12:37:15.253766    6872 cli_runner.go:164] Run: docker exec ingress-addon-legacy-123658 stat /var/lib/dpkg/alternatives/iptables
	I0108 12:37:15.368111    6872 oci.go:144] the created container "ingress-addon-legacy-123658" has a running status.
	I0108 12:37:15.368154    6872 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa...
	I0108 12:37:15.450111    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 12:37:15.450208    6872 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 12:37:15.560982    6872 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Status}}
	I0108 12:37:15.619501    6872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 12:37:15.619520    6872 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-123658 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 12:37:15.724202    6872 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Status}}
	I0108 12:37:15.782063    6872 machine.go:88] provisioning docker machine ...
	I0108 12:37:15.782107    6872 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-123658"
	I0108 12:37:15.782218    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:15.839847    6872 main.go:134] libmachine: Using SSH client type: native
	I0108 12:37:15.840049    6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50561 <nil> <nil>}
	I0108 12:37:15.840064    6872 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-123658 && echo "ingress-addon-legacy-123658" | sudo tee /etc/hostname
	I0108 12:37:15.967595    6872 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-123658
	
	I0108 12:37:15.967704    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:16.025641    6872 main.go:134] libmachine: Using SSH client type: native
	I0108 12:37:16.025816    6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50561 <nil> <nil>}
	I0108 12:37:16.025832    6872 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-123658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-123658/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-123658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 12:37:16.146397    6872 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 12:37:16.146419    6872 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 12:37:16.146438    6872 ubuntu.go:177] setting up certificates
	I0108 12:37:16.146446    6872 provision.go:83] configureAuth start
	I0108 12:37:16.146541    6872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-123658
	I0108 12:37:16.204001    6872 provision.go:138] copyHostCerts
	I0108 12:37:16.204064    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:37:16.204124    6872 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 12:37:16.204131    6872 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:37:16.204253    6872 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 12:37:16.204428    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:37:16.204473    6872 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 12:37:16.204478    6872 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:37:16.204549    6872 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 12:37:16.204684    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:37:16.204726    6872 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 12:37:16.204730    6872 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:37:16.204798    6872 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 12:37:16.204925    6872 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-123658 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-123658]
	I0108 12:37:16.312882    6872 provision.go:172] copyRemoteCerts
	I0108 12:37:16.312942    6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 12:37:16.313005    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:16.370128    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
	I0108 12:37:16.455432    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 12:37:16.455534    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0108 12:37:16.472398    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 12:37:16.472480    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 12:37:16.490255    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 12:37:16.490337    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 12:37:16.507438    6872 provision.go:86] duration metric: configureAuth took 360.984333ms
	I0108 12:37:16.507453    6872 ubuntu.go:193] setting minikube options for container-runtime
	I0108 12:37:16.507612    6872 config.go:180] Loaded profile config "ingress-addon-legacy-123658": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 12:37:16.507688    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:16.565205    6872 main.go:134] libmachine: Using SSH client type: native
	I0108 12:37:16.565364    6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50561 <nil> <nil>}
	I0108 12:37:16.565376    6872 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 12:37:16.682349    6872 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 12:37:16.682367    6872 ubuntu.go:71] root file system type: overlay
	I0108 12:37:16.682506    6872 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 12:37:16.682599    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:16.740597    6872 main.go:134] libmachine: Using SSH client type: native
	I0108 12:37:16.740762    6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50561 <nil> <nil>}
	I0108 12:37:16.740813    6872 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 12:37:16.868535    6872 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 12:37:16.868665    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:16.927877    6872 main.go:134] libmachine: Using SSH client type: native
	I0108 12:37:16.928050    6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50561 <nil> <nil>}
	I0108 12:37:16.928064    6872 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 12:37:17.515872    6872 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 20:37:16.866135097 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0108 12:37:17.515910    6872 machine.go:91] provisioned docker machine in 1.73384737s
	I0108 12:37:17.515930    6872 client.go:171] LocalClient.Create took 9.709870207s
	I0108 12:37:17.515947    6872 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-123658" took 9.709942799s
	I0108 12:37:17.515958    6872 start.go:300] post-start starting for "ingress-addon-legacy-123658" (driver="docker")
	I0108 12:37:17.515966    6872 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 12:37:17.516093    6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 12:37:17.516214    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:17.575304    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
	I0108 12:37:17.663004    6872 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 12:37:17.666526    6872 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 12:37:17.666543    6872 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 12:37:17.666556    6872 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 12:37:17.666562    6872 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 12:37:17.666572    6872 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 12:37:17.666664    6872 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 12:37:17.666847    6872 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 12:37:17.666853    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /etc/ssl/certs/40832.pem
	I0108 12:37:17.667078    6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 12:37:17.674461    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:37:17.691641    6872 start.go:303] post-start completed in 175.675724ms
	I0108 12:37:17.692226    6872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-123658
	I0108 12:37:17.750933    6872 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/config.json ...
	I0108 12:37:17.751381    6872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 12:37:17.751465    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:17.807814    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
	I0108 12:37:17.892935    6872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 12:37:17.897352    6872 start.go:128] duration metric: createHost completed in 10.144458228s
	I0108 12:37:17.897369    6872 start.go:83] releasing machines lock for "ingress-addon-legacy-123658", held for 10.144642002s
	I0108 12:37:17.897470    6872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-123658
	I0108 12:37:17.954806    6872 ssh_runner.go:195] Run: cat /version.json
	I0108 12:37:17.954834    6872 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 12:37:17.954893    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:17.954917    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:18.018781    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
	I0108 12:37:18.018907    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
	I0108 12:37:18.362152    6872 ssh_runner.go:195] Run: systemctl --version
	I0108 12:37:18.367033    6872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 12:37:18.376839    6872 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 12:37:18.376904    6872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 12:37:18.386530    6872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 12:37:18.399469    6872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 12:37:18.469692    6872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 12:37:18.541073    6872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:37:18.610109    6872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 12:37:18.822363    6872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:37:18.853711    6872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:37:18.930903    6872 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.21 ...
	I0108 12:37:18.931149    6872 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-123658 dig +short host.docker.internal
	I0108 12:37:19.046619    6872 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 12:37:19.046738    6872 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 12:37:19.051083    6872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 12:37:19.061084    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:37:19.120796    6872 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0108 12:37:19.120900    6872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 12:37:19.144666    6872 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0108 12:37:19.144685    6872 docker.go:543] Images already preloaded, skipping extraction
	I0108 12:37:19.144774    6872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 12:37:19.170889    6872 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0108 12:37:19.170920    6872 cache_images.go:84] Images are preloaded, skipping loading
	I0108 12:37:19.171022    6872 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 12:37:19.239917    6872 cni.go:95] Creating CNI manager for ""
	I0108 12:37:19.239936    6872 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 12:37:19.239964    6872 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 12:37:19.239980    6872 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-123658 NodeName:ingress-addon-legacy-123658 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 12:37:19.240123    6872 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-123658"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 12:37:19.240211    6872 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-123658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 12:37:19.240286    6872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 12:37:19.248146    6872 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 12:37:19.248215    6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 12:37:19.255696    6872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0108 12:37:19.268812    6872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 12:37:19.281890    6872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0108 12:37:19.294893    6872 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 12:37:19.298825    6872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 12:37:19.308798    6872 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658 for IP: 192.168.49.2
	I0108 12:37:19.308960    6872 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 12:37:19.309040    6872 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 12:37:19.309090    6872 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.key
	I0108 12:37:19.309108    6872 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.crt with IP's: []
	I0108 12:37:19.445343    6872 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.crt ...
	I0108 12:37:19.445355    6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.crt: {Name:mk84f7860d5c3b6cc55150059aadf2f55a36fd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:37:19.445740    6872 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.key ...
	I0108 12:37:19.445748    6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.key: {Name:mka89f32d4824ab11494b2ccc762c8d45e2a2f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:37:19.445964    6872 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key.dd3b5fb2
	I0108 12:37:19.445982    6872 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 12:37:19.519320    6872 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt.dd3b5fb2 ...
	I0108 12:37:19.519328    6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt.dd3b5fb2: {Name:mka63b112e800d0a58356444d154c62037b034b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:37:19.519556    6872 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key.dd3b5fb2 ...
	I0108 12:37:19.519563    6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key.dd3b5fb2: {Name:mk885f55cf4c1efcb0608b93715a9b7a860b54ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:37:19.519748    6872 certs.go:320] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt
	I0108 12:37:19.519920    6872 certs.go:324] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key
	I0108 12:37:19.520105    6872 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key
	I0108 12:37:19.520124    6872 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt with IP's: []
	I0108 12:37:19.662437    6872 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt ...
	I0108 12:37:19.662446    6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt: {Name:mk2d2c053a1ce2e9a514e94c944bde5fd264199d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:37:19.662729    6872 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key ...
	I0108 12:37:19.662737    6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key: {Name:mk683f0f8a703c2f5ba7127ed4fd24655f2d9618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:37:19.662924    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 12:37:19.662956    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 12:37:19.662983    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 12:37:19.663006    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 12:37:19.663029    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 12:37:19.663050    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 12:37:19.663069    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 12:37:19.663089    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 12:37:19.663195    6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 12:37:19.663245    6872 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 12:37:19.663256    6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 12:37:19.663337    6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 12:37:19.663374    6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 12:37:19.663408    6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 12:37:19.663484    6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:37:19.663522    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /usr/share/ca-certificates/40832.pem
	I0108 12:37:19.663545    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:37:19.663566    6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem -> /usr/share/ca-certificates/4083.pem
	I0108 12:37:19.664069    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 12:37:19.683555    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 12:37:19.700970    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 12:37:19.719044    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 12:37:19.736200    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 12:37:19.753513    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 12:37:19.770679    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 12:37:19.788217    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 12:37:19.806214    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 12:37:19.823864    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 12:37:19.841502    6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 12:37:19.859216    6872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
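	The certs.go/crypto.go steps above generate the profile's client, apiserver, and proxy-client certificates and then scp them into /var/lib/minikube/certs. The sketch below is a rough Go illustration of one such generation step, using only details visible in this log (the apiserver SAN IPs and the 26280h CertExpiration); it self-signs for brevity, whereas minikube signs against the profile CA, so treat everything beyond those logged values as an assumption, not minikube's actual code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Errors elided for brevity; a real implementation checks every one.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN IPs taken from the "Generating cert ... with IP's" line above.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		// Self-signed here for brevity; minikube signs against the profile CA instead.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}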
	I0108 12:37:19.872748    6872 ssh_runner.go:195] Run: openssl version
	I0108 12:37:19.878388    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 12:37:19.886764    6872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 12:37:19.891015    6872 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 12:37:19.891075    6872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 12:37:19.896648    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 12:37:19.904794    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 12:37:19.913305    6872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 12:37:19.917611    6872 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 12:37:19.917666    6872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 12:37:19.923187    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 12:37:19.931457    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 12:37:19.939545    6872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:37:19.943814    6872 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:37:19.943910    6872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:37:19.949473    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 12:37:19.957589    6872 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-123658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:37:19.957793    6872 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 12:37:19.980719    6872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 12:37:19.988846    6872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 12:37:19.996341    6872 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 12:37:19.996435    6872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 12:37:20.004005    6872 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 12:37:20.004033    6872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 12:37:20.053056    6872 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I0108 12:37:20.053093    6872 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 12:37:20.357188    6872 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 12:37:20.357277    6872 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 12:37:20.357399    6872 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 12:37:20.579992    6872 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 12:37:20.580927    6872 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 12:37:20.580974    6872 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 12:37:20.649044    6872 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 12:37:20.671978    6872 out.go:204]   - Generating certificates and keys ...
	I0108 12:37:20.672060    6872 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 12:37:20.672123    6872 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 12:37:20.731453    6872 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 12:37:20.808804    6872 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0108 12:37:20.847089    6872 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0108 12:37:21.227315    6872 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0108 12:37:21.416597    6872 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0108 12:37:21.416773    6872 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 12:37:21.468123    6872 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0108 12:37:21.468228    6872 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 12:37:21.650473    6872 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 12:37:21.701479    6872 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 12:37:21.815610    6872 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0108 12:37:21.815700    6872 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 12:37:21.999951    6872 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 12:37:22.071347    6872 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 12:37:22.117671    6872 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 12:37:22.223521    6872 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 12:37:22.224301    6872 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 12:37:22.245954    6872 out.go:204]   - Booting up control plane ...
	I0108 12:37:22.246043    6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 12:37:22.246117    6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 12:37:22.246180    6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 12:37:22.246268    6872 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 12:37:22.246397    6872 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 12:38:02.234106    6872 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 12:38:02.234853    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:38:02.235044    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:38:07.236415    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:38:07.236641    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:38:17.237099    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:38:17.237282    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:38:37.238652    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:38:37.238858    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:39:17.238826    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:39:17.238989    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:39:17.239007    6872 kubeadm.go:317] 
	I0108 12:39:17.239052    6872 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I0108 12:39:17.239102    6872 kubeadm.go:317] 		timed out waiting for the condition
	I0108 12:39:17.239114    6872 kubeadm.go:317] 
	I0108 12:39:17.239150    6872 kubeadm.go:317] 	This error is likely caused by:
	I0108 12:39:17.239182    6872 kubeadm.go:317] 		- The kubelet is not running
	I0108 12:39:17.239289    6872 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 12:39:17.239305    6872 kubeadm.go:317] 
	I0108 12:39:17.239431    6872 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 12:39:17.239492    6872 kubeadm.go:317] 		- 'systemctl status kubelet'
	I0108 12:39:17.239531    6872 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I0108 12:39:17.239546    6872 kubeadm.go:317] 
	I0108 12:39:17.239659    6872 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 12:39:17.239732    6872 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0108 12:39:17.239738    6872 kubeadm.go:317] 
	I0108 12:39:17.239827    6872 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0108 12:39:17.239863    6872 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I0108 12:39:17.239920    6872 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I0108 12:39:17.239976    6872 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I0108 12:39:17.239995    6872 kubeadm.go:317] 
	I0108 12:39:17.242130    6872 kubeadm.go:317] W0108 20:37:20.052101     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 12:39:17.242192    6872 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 12:39:17.242290    6872 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
	I0108 12:39:17.242377    6872 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 12:39:17.242499    6872 kubeadm.go:317] W0108 20:37:22.229132     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 12:39:17.242609    6872 kubeadm.go:317] W0108 20:37:22.230118     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 12:39:17.242669    6872 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 12:39:17.242737    6872 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
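	The [kubelet-check] lines above are plain HTTP probes of the kubelet's local healthz endpoint; "connection refused" on 127.0.0.1:10248 means the kubelet never started listening, which is why kubeadm eventually times out waiting for the control plane. A minimal Go sketch of that probe, assuming only the endpoint shown verbatim in the log (illustrative, not kubeadm's implementation):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The kubeadm [kubelet-check] phase repeats this GET against
		// http://localhost:10248/healthz until it answers or the 4m0s budget runs out.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// In this run every attempt ends here with
			// "dial tcp 127.0.0.1:10248: connect: connection refused".
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
	}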
	W0108 12:39:17.242976    6872 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0108 20:37:20.052101     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0108 20:37:22.229132     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0108 20:37:22.230118     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0108 20:37:20.052101     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0108 20:37:22.229132     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0108 20:37:22.230118     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 12:39:17.243014    6872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 12:39:17.658029    6872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 12:39:17.667913    6872 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 12:39:17.667980    6872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 12:39:17.675447    6872 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 12:39:17.675474    6872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 12:39:17.722107    6872 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I0108 12:39:17.722167    6872 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 12:39:18.009810    6872 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 12:39:18.009898    6872 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 12:39:18.009967    6872 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 12:39:18.228579    6872 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 12:39:18.242865    6872 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 12:39:18.242899    6872 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 12:39:18.298448    6872 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 12:39:18.320227    6872 out.go:204]   - Generating certificates and keys ...
	I0108 12:39:18.320331    6872 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 12:39:18.320403    6872 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 12:39:18.320481    6872 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 12:39:18.320576    6872 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 12:39:18.320731    6872 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 12:39:18.320799    6872 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 12:39:18.320889    6872 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 12:39:18.320949    6872 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 12:39:18.321041    6872 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 12:39:18.321140    6872 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 12:39:18.321209    6872 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 12:39:18.321271    6872 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 12:39:18.466596    6872 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 12:39:18.669614    6872 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 12:39:18.882171    6872 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 12:39:18.967242    6872 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 12:39:18.968004    6872 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 12:39:18.991842    6872 out.go:204]   - Booting up control plane ...
	I0108 12:39:18.992081    6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 12:39:18.992245    6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 12:39:18.992384    6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 12:39:18.992571    6872 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 12:39:18.992843    6872 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 12:39:58.976690    6872 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 12:39:58.977491    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:39:58.977714    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:40:03.978945    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:40:03.979148    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:40:13.980539    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:40:13.980748    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:40:33.982885    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:40:33.983103    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:41:13.983801    6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 12:41:13.984085    6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 12:41:13.984107    6872 kubeadm.go:317] 
	I0108 12:41:13.984182    6872 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I0108 12:41:13.984231    6872 kubeadm.go:317] 		timed out waiting for the condition
	I0108 12:41:13.984241    6872 kubeadm.go:317] 
	I0108 12:41:13.984276    6872 kubeadm.go:317] 	This error is likely caused by:
	I0108 12:41:13.984321    6872 kubeadm.go:317] 		- The kubelet is not running
	I0108 12:41:13.984430    6872 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 12:41:13.984440    6872 kubeadm.go:317] 
	I0108 12:41:13.984540    6872 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 12:41:13.984607    6872 kubeadm.go:317] 		- 'systemctl status kubelet'
	I0108 12:41:13.984657    6872 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I0108 12:41:13.984668    6872 kubeadm.go:317] 
	I0108 12:41:13.984805    6872 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 12:41:13.984909    6872 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0108 12:41:13.984923    6872 kubeadm.go:317] 
	I0108 12:41:13.985039    6872 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0108 12:41:13.985115    6872 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I0108 12:41:13.985212    6872 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I0108 12:41:13.985250    6872 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I0108 12:41:13.985260    6872 kubeadm.go:317] 
	I0108 12:41:13.988378    6872 kubeadm.go:317] W0108 20:39:17.721599    3452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 12:41:13.988452    6872 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 12:41:13.988566    6872 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
	I0108 12:41:13.988650    6872 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 12:41:13.988740    6872 kubeadm.go:317] W0108 20:39:18.972373    3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 12:41:13.988827    6872 kubeadm.go:317] W0108 20:39:18.973116    3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 12:41:13.988912    6872 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 12:41:13.988978    6872 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 12:41:13.988995    6872 kubeadm.go:398] StartCluster complete in 3m54.034168221s
	I0108 12:41:13.989095    6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 12:41:14.011956    6872 logs.go:274] 0 containers: []
	W0108 12:41:14.011970    6872 logs.go:276] No container was found matching "kube-apiserver"
	I0108 12:41:14.012053    6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 12:41:14.035075    6872 logs.go:274] 0 containers: []
	W0108 12:41:14.035089    6872 logs.go:276] No container was found matching "etcd"
	I0108 12:41:14.035184    6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 12:41:14.057359    6872 logs.go:274] 0 containers: []
	W0108 12:41:14.057372    6872 logs.go:276] No container was found matching "coredns"
	I0108 12:41:14.057453    6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 12:41:14.080972    6872 logs.go:274] 0 containers: []
	W0108 12:41:14.080987    6872 logs.go:276] No container was found matching "kube-scheduler"
	I0108 12:41:14.081081    6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 12:41:14.103989    6872 logs.go:274] 0 containers: []
	W0108 12:41:14.104002    6872 logs.go:276] No container was found matching "kube-proxy"
	I0108 12:41:14.104093    6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 12:41:14.127572    6872 logs.go:274] 0 containers: []
	W0108 12:41:14.127586    6872 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 12:41:14.127675    6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 12:41:14.150493    6872 logs.go:274] 0 containers: []
	W0108 12:41:14.150508    6872 logs.go:276] No container was found matching "storage-provisioner"
	I0108 12:41:14.150591    6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 12:41:14.172731    6872 logs.go:274] 0 containers: []
	W0108 12:41:14.172745    6872 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 12:41:14.172753    6872 logs.go:123] Gathering logs for container status ...
	I0108 12:41:14.172760    6872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 12:41:16.225088    6872 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052340542s)
	I0108 12:41:16.225244    6872 logs.go:123] Gathering logs for kubelet ...
	I0108 12:41:16.225253    6872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 12:41:16.264127    6872 logs.go:123] Gathering logs for dmesg ...
	I0108 12:41:16.264141    6872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 12:41:16.277154    6872 logs.go:123] Gathering logs for describe nodes ...
	I0108 12:41:16.277168    6872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 12:41:16.329730    6872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 12:41:16.329742    6872 logs.go:123] Gathering logs for Docker ...
	I0108 12:41:16.329751    6872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0108 12:41:16.345256    6872 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0108 20:39:17.721599    3452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0108 20:39:18.972373    3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0108 20:39:18.973116    3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 12:41:16.345280    6872 out.go:239] * 
	W0108 12:41:16.345405    6872 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0108 20:39:17.721599    3452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0108 20:39:18.972373    3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0108 20:39:18.973116    3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 12:41:16.345426    6872 out.go:239] * 
	W0108 12:41:16.346035    6872 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 12:41:16.410910    6872 out.go:177] 
	W0108 12:41:16.455008    6872 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0108 20:39:17.721599    3452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0108 20:39:18.972373    3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0108 20:39:18.973116    3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 12:41:16.455238    6872 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 12:41:16.455353    6872 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 12:41:16.512484    6872 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-123658 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (258.44s)
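Note on the failure above: the kubelet on the ingress-addon-legacy-123658 node never answered its health check on 127.0.0.1:10248, so kubeadm's wait-control-plane phase timed out and minikube exited with K8S_KUBELET_NOT_RUNNING. A minimal troubleshooting sketch, restricted to the commands the log itself recommends (it assumes a shell on the node, e.g. via 'minikube ssh -p ingress-addon-legacy-123658', and reuses the test's own start flags):

	# Is the kubelet service running, and why did it exit?
	systemctl status kubelet
	journalctl -xeu kubelet
	# Did the container runtime start (and then kill) any control-plane containers?
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID        # substitute an ID from the previous command
	# minikube's own suggestion: retry with an explicit kubelet cgroup driver
	minikube start -p ingress-addon-legacy-123658 --kubernetes-version=v1.18.20 --driver=docker --extra-config=kubelet.cgroup-driver=systemd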

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-123658 addons enable ingress --alsologtostderr -v=5
E0108 12:41:38.761467    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-123658 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.134633678s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 12:41:16.661333    7188 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:41:16.661611    7188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:41:16.661617    7188 out.go:309] Setting ErrFile to fd 2...
	I0108 12:41:16.661621    7188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:41:16.661732    7188 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 12:41:16.683805    7188 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0108 12:41:16.706268    7188 config.go:180] Loaded profile config "ingress-addon-legacy-123658": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 12:41:16.706305    7188 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-123658"
	I0108 12:41:16.706319    7188 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-123658"
	I0108 12:41:16.706923    7188 host.go:66] Checking if "ingress-addon-legacy-123658" exists ...
	I0108 12:41:16.708012    7188 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Status}}
	I0108 12:41:16.787013    7188 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0108 12:41:16.809187    7188 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0108 12:41:16.830768    7188 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0108 12:41:16.851566    7188 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0108 12:41:16.872813    7188 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 12:41:16.872836    7188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0108 12:41:16.872941    7188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:41:16.931818    7188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
	I0108 12:41:17.025532    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:17.076869    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:17.076895    7188 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:17.353443    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:17.405982    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:17.406003    7188 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:17.946609    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:18.002304    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:18.002319    7188 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:18.659538    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:18.712671    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:18.712691    7188 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:19.504144    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:19.555969    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:19.555993    7188 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:20.727048    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:20.780373    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:20.780389    7188 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:23.034266    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:23.087995    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:23.088012    7188 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:24.699290    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:24.753529    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:24.753547    7188 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:27.559268    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:27.613266    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:27.613284    7188 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:31.440442    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:31.494774    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:31.494800    7188 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:39.192803    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:39.246406    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:39.246422    7188 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:53.883420    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:41:53.937655    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:41:53.937672    7188 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:22.346272    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:42:22.400532    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:22.400554    7188 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:45.570830    7188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0108 12:42:45.626012    7188 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:45.626045    7188 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-123658"
	I0108 12:42:45.647636    7188 out.go:177] * Verifying ingress addon...
	I0108 12:42:45.669989    7188 out.go:177] 
	W0108 12:42:45.691934    7188 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-123658" does not exist: client config: context "ingress-addon-legacy-123658" does not exist]
	W0108 12:42:45.691961    7188 out.go:239] * 
	W0108 12:42:45.696016    7188 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 12:42:45.717320    7188 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
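This addon failure is a downstream symptom of the cluster-start failure above: every retried 'kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml' is refused at localhost:8443 because the kube-apiserver never came up, and the final error shows the "ingress-addon-legacy-123658" kubeconfig context does not exist. A quick way to confirm the apiserver is the blocker before looking at the addon itself (a hedged sketch; the /healthz endpoint and 'curl -k' are standard apiserver/curl usage, not taken from this log):

	# from inside the node (minikube ssh -p ingress-addon-legacy-123658):
	curl -k https://localhost:8443/healthz    # "connection refused" here matches the retries above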
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-123658
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-123658:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c",
	        "Created": "2023-01-08T20:37:14.837281659Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40467,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T20:37:15.121132834Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/hostname",
	        "HostsPath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/hosts",
	        "LogPath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c-json.log",
	        "Name": "/ingress-addon-legacy-123658",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-123658:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-123658",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-123658",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-123658/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-123658",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-123658",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-123658",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d7d5974bac8c7b60f4ee401a527919bb4ccfd2ba2be8b669e01f062cc7343dc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50561"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50557"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50558"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50559"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50560"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1d7d5974bac8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-123658": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "eff04ae95ae9",
	                        "ingress-addon-legacy-123658"
	                    ],
	                    "NetworkID": "e4c01ae5bb084eb19a177d0f649e4583e328955b79d614e96991d88437e398fb",
	                    "EndpointID": "fe1a985da68485268ea2bdda760d131498ae483a9d4e02431accc2faab387b35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-123658 -n ingress-addon-legacy-123658
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-123658 -n ingress-addon-legacy-123658: exit status 6 (391.182259ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 12:42:46.181745    7272 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-123658" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-123658" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)
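The exit status 6 above comes from the status helper finding no cluster entry for the profile in the kubeconfig the harness points at (status.go:415). As a rough illustration of that lookup, and not the code in status.go, a small client-go sketch with the path and profile name copied from this report:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path and profile name taken from the report above.
		kubeconfig := "/Users/jenkins/minikube-integration/15565-2761/kubeconfig"
		profile := "ingress-addon-legacy-123658"

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Clusters[profile]; !ok {
			// Same condition behind the "does not appear in .../kubeconfig" error above.
			fmt.Printf("%q has no cluster entry in %s\n", profile, kubeconfig)
		}
	}

Running `minikube update-context` against the profile, as the stdout warning suggests, is the remedy the tool itself proposes for the stale kubectl context.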

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-123658 addons enable ingress-dns --alsologtostderr -v=5
E0108 12:43:00.681004    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-123658 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.069652496s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 12:42:46.247071    7282 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:42:46.247419    7282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:42:46.247428    7282 out.go:309] Setting ErrFile to fd 2...
	I0108 12:42:46.247432    7282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:42:46.247535    7282 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 12:42:46.269490    7282 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0108 12:42:46.292018    7282 config.go:180] Loaded profile config "ingress-addon-legacy-123658": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0108 12:42:46.292049    7282 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-123658"
	I0108 12:42:46.292063    7282 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-123658"
	I0108 12:42:46.292629    7282 host.go:66] Checking if "ingress-addon-legacy-123658" exists ...
	I0108 12:42:46.293666    7282 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Status}}
	I0108 12:42:46.372541    7282 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0108 12:42:46.394237    7282 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0108 12:42:46.416214    7282 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 12:42:46.416256    7282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0108 12:42:46.416426    7282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
	I0108 12:42:46.475472    7282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
	I0108 12:42:46.567339    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:42:46.618372    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:46.618395    7282 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:46.896080    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:42:46.951611    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:46.951635    7282 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:47.492468    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:42:47.547395    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:47.547415    7282 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:48.204722    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:42:48.258366    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:48.258387    7282 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:49.051210    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:42:49.104413    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:49.104432    7282 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:50.275170    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:42:50.327757    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:50.327781    7282 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:52.581223    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:42:52.633738    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:52.633753    7282 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:54.246310    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:42:54.298983    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:54.299001    7282 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:57.103614    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:42:57.157290    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:42:57.157308    7282 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:43:00.984456    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:43:01.039594    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:43:01.039609    7282 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:43:08.739211    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:43:08.792600    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:43:08.792616    7282 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:43:23.430314    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:43:23.483446    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:43:23.483463    7282 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:43:51.892033    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:43:51.945747    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:43:51.945765    7282 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:44:15.116089    7282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0108 12:44:15.169109    7282 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0108 12:44:15.189878    7282 out.go:177] 
	W0108 12:44:15.211151    7282 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0108 12:44:15.211176    7282 out.go:239] * 
	* 
	W0108 12:44:15.215109    7282 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 12:44:15.236950    7282 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-123658
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-123658:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c",
	        "Created": "2023-01-08T20:37:14.837281659Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40467,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T20:37:15.121132834Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/hostname",
	        "HostsPath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/hosts",
	        "LogPath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c-json.log",
	        "Name": "/ingress-addon-legacy-123658",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-123658:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-123658",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-123658",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-123658/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-123658",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-123658",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-123658",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d7d5974bac8c7b60f4ee401a527919bb4ccfd2ba2be8b669e01f062cc7343dc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50561"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50557"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50558"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50559"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50560"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1d7d5974bac8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-123658": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "eff04ae95ae9",
	                        "ingress-addon-legacy-123658"
	                    ],
	                    "NetworkID": "e4c01ae5bb084eb19a177d0f649e4583e328955b79d614e96991d88437e398fb",
	                    "EndpointID": "fe1a985da68485268ea2bdda760d131498ae483a9d4e02431accc2faab387b35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-123658 -n ingress-addon-legacy-123658
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-123658 -n ingress-addon-legacy-123658: exit status 6 (396.231907ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 12:44:15.706558    7366 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-123658" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-123658" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)
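The stderr trace for this test shows addons.go retrying the same kubectl apply with growing delays (from roughly 0.3s up to ~28s) until the budget is exhausted, because the apiserver behind localhost:8443 never answers. A minimal, self-contained sketch of that retry-with-backoff shape, written only as an illustration and not taken from minikube's retry.go:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// apply stands in for the kubectl apply that keeps failing while the
	// apiserver is unreachable, as in the log above.
	func apply() error {
		return errors.New("connection to the server localhost:8443 was refused")
	}

	func main() {
		delay := 300 * time.Millisecond
		deadline := time.Now().Add(89 * time.Second) // comparable to the 1m29s the real run spent
		for time.Now().Before(deadline) {
			if err := apply(); err == nil {
				fmt.Println("apply succeeded")
				return
			}
			fmt.Printf("apply failed, will retry after %v\n", delay)
			time.Sleep(delay)
			delay *= 2 // grow the wait between attempts, roughly like the intervals in the log
		}
		fmt.Println("giving up: apiserver never became reachable")
	}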

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: failed to get Kubernetes client: <nil>
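"failed to get Kubernetes client: <nil>" means the test could not build a client from the profile's kubeconfig, consistent with the missing cluster entry reported by the earlier status checks. A hedged client-go sketch of that construction, not the helper's actual code, using the kubeconfig path from this report:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/Users/jenkins/minikube-integration/15565-2761/kubeconfig" // path from the report
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			// With no usable context for the profile, client construction fails here.
			fmt.Println("load kubeconfig:", err)
			return
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println("build client:", err)
			return
		}
		fmt.Println("client ready:", client != nil)
	}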
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-123658
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-123658:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c",
	        "Created": "2023-01-08T20:37:14.837281659Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40467,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T20:37:15.121132834Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/hostname",
	        "HostsPath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/hosts",
	        "LogPath": "/var/lib/docker/containers/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c/eff04ae95ae9af754b53a15b33f910644d17dc22a513588e547d5322620ea27c-json.log",
	        "Name": "/ingress-addon-legacy-123658",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-123658:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-123658",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/64171bdf04d60656ec58ee344f1e2070bd3d456f8383cf16569375860d8c68e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-123658",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-123658/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-123658",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-123658",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-123658",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d7d5974bac8c7b60f4ee401a527919bb4ccfd2ba2be8b669e01f062cc7343dc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50561"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50557"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50558"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50559"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50560"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1d7d5974bac8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-123658": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "eff04ae95ae9",
	                        "ingress-addon-legacy-123658"
	                    ],
	                    "NetworkID": "e4c01ae5bb084eb19a177d0f649e4583e328955b79d614e96991d88437e398fb",
	                    "EndpointID": "fe1a985da68485268ea2bdda760d131498ae483a9d4e02431accc2faab387b35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
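
As an illustrative aside (not part of the test code), the post-mortem above gathers the kic container's state by shelling out to `docker inspect`; the same fields can be read with the Docker Engine Go SDK. This is a sketch only, with the container name taken from the dump above and everything else assumed. The host-status check that follows is what then surfaces the stale kubeconfig.

	// Illustrative sketch (assumption; not from minikube's test suite):
	// read the same container state via the Docker Engine Go SDK instead of
	// shelling out to `docker inspect`.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Same container the post-mortem inspects; its name matches the minikube profile.
		info, err := cli.ContainerInspect(context.Background(), "ingress-addon-legacy-123658")
		if err != nil {
			panic(err)
		}
		fmt.Println("status:", info.State.Status, "pid:", info.State.Pid)
	}
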
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-123658 -n ingress-addon-legacy-123658
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-123658 -n ingress-addon-legacy-123658: exit status 6 (442.559481ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 12:44:16.208521    7378 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-123658" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-123658" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.50s)
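
The FAIL above traces back to the status command's stderr: the profile's context is absent from the kubeconfig ("does not appear in .../kubeconfig"), so the host is treated as not running even though the container itself is up. The warning in the stdout block names the fix, `minikube update-context`. Below is a minimal sketch, not taken from the test suite, of checking for that context with k8s.io/client-go; the kubeconfig path and profile name are the ones from the log, the rest is an assumption.

	// Minimal sketch (assumption, not from the test suite): verify that the
	// profile's context exists in the kubeconfig before extracting its endpoint.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		const profile = "ingress-addon-legacy-123658"
		const kubeconfig = "/Users/jenkins/minikube-integration/15565-2761/kubeconfig"

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		ctx, ok := cfg.Contexts[profile]
		if !ok {
			// The condition the status command reported (exit status 6):
			// the profile does not appear in the kubeconfig.
			fmt.Printf("context %q missing; `minikube update-context -p %s` rewrites it\n", profile, profile)
			return
		}
		if cl, ok := cfg.Clusters[ctx.Cluster]; ok {
			fmt.Println("API server:", cl.Server)
		}
	}
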

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (245.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-124908
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-124908
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-124908: (36.66592132s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124908 --wait=true -v=8 --alsologtostderr
E0108 12:54:59.580347    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:55:16.866878    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-124908 --wait=true -v=8 --alsologtostderr: exit status 80 (3m23.543958401s)

                                                
                                                
-- stdout --
	* [multinode-124908] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-124908 in cluster multinode-124908
	* Pulling base image ...
	* Restarting existing docker container for "multinode-124908" ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-124908-m02 in cluster multinode-124908
	* Pulling base image ...
	* Restarting existing docker container for "multinode-124908-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	  - env NO_PROXY=192.168.58.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 12:52:48.476511   10230 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:52:48.476690   10230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:52:48.476695   10230 out.go:309] Setting ErrFile to fd 2...
	I0108 12:52:48.476699   10230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:52:48.476805   10230 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 12:52:48.477282   10230 out.go:303] Setting JSON to false
	I0108 12:52:48.496851   10230 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3141,"bootTime":1673208027,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 12:52:48.496933   10230 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:52:48.518863   10230 out.go:177] * [multinode-124908] minikube v1.28.0 on Darwin 13.0.1
	I0108 12:52:48.562685   10230 notify.go:220] Checking for updates...
	I0108 12:52:48.584492   10230 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 12:52:48.605868   10230 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:52:48.627742   10230 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:52:48.649564   10230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:52:48.670855   10230 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 12:52:48.692830   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:52:48.692882   10230 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 12:52:48.752565   10230 docker.go:137] docker version: linux-20.10.21
	I0108 12:52:48.752702   10230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:52:48.893190   10230 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-08 20:52:48.802495891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:52:48.915254   10230 out.go:177] * Using the docker driver based on existing profile
	I0108 12:52:48.936897   10230 start.go:294] selected driver: docker
	I0108 12:52:48.936925   10230 start.go:838] validating driver "docker" against &{Name:multinode-124908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false
logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:52:48.937144   10230 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 12:52:48.937405   10230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:52:49.080084   10230 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-08 20:52:48.989054771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:52:49.082593   10230 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 12:52:49.082624   10230 cni.go:95] Creating CNI manager for ""
	I0108 12:52:49.082633   10230 cni.go:156] 3 nodes found, recommending kindnet
	I0108 12:52:49.082649   10230 start_flags.go:317] config:
	{Name:multinode-124908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false
nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:52:49.126296   10230 out.go:177] * Starting control plane node multinode-124908 in cluster multinode-124908
	I0108 12:52:49.147504   10230 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 12:52:49.169447   10230 out.go:177] * Pulling base image ...
	I0108 12:52:49.212470   10230 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:52:49.212524   10230 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 12:52:49.212576   10230 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 12:52:49.212608   10230 cache.go:57] Caching tarball of preloaded images
	I0108 12:52:49.212817   10230 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 12:52:49.212842   10230 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 12:52:49.213872   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:52:49.269073   10230 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 12:52:49.269089   10230 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 12:52:49.269136   10230 cache.go:193] Successfully downloaded all kic artifacts
	I0108 12:52:49.269193   10230 start.go:364] acquiring machines lock for multinode-124908: {Name:mk965de3adbf36f4b9fc247c2c9d993fbcc7d3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 12:52:49.269287   10230 start.go:368] acquired machines lock for "multinode-124908" in 72.18µs
	I0108 12:52:49.269311   10230 start.go:96] Skipping create...Using existing machine configuration
	I0108 12:52:49.269319   10230 fix.go:55] fixHost starting: 
	I0108 12:52:49.269569   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:52:49.325214   10230 fix.go:103] recreateIfNeeded on multinode-124908: state=Stopped err=<nil>
	W0108 12:52:49.325247   10230 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 12:52:49.368007   10230 out.go:177] * Restarting existing docker container for "multinode-124908" ...
	I0108 12:52:49.390173   10230 cli_runner.go:164] Run: docker start multinode-124908
	I0108 12:52:49.731447   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:52:49.792492   10230 kic.go:415] container "multinode-124908" state is running.
	I0108 12:52:49.793109   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908
	I0108 12:52:49.856775   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:52:49.857472   10230 machine.go:88] provisioning docker machine ...
	I0108 12:52:49.857522   10230 ubuntu.go:169] provisioning hostname "multinode-124908"
	I0108 12:52:49.857646   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:49.928096   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:49.928348   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:49.928364   10230 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-124908 && echo "multinode-124908" | sudo tee /etc/hostname
	I0108 12:52:50.068101   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-124908
	
	I0108 12:52:50.068246   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:50.132590   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:50.132752   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:50.132766   10230 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-124908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-124908/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-124908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 12:52:50.251494   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 12:52:50.251517   10230 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 12:52:50.251543   10230 ubuntu.go:177] setting up certificates
	I0108 12:52:50.251552   10230 provision.go:83] configureAuth start
	I0108 12:52:50.251650   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908
	I0108 12:52:50.313533   10230 provision.go:138] copyHostCerts
	I0108 12:52:50.313583   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:52:50.313649   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 12:52:50.313658   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:52:50.313785   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 12:52:50.313970   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:52:50.314016   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 12:52:50.314021   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:52:50.314085   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 12:52:50.314205   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:52:50.314239   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 12:52:50.314244   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:52:50.314307   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 12:52:50.314434   10230 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.multinode-124908 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-124908]
	I0108 12:52:50.380198   10230 provision.go:172] copyRemoteCerts
	I0108 12:52:50.380286   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 12:52:50.380350   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:50.444096   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:50.531821   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 12:52:50.531929   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 12:52:50.552934   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 12:52:50.553022   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 12:52:50.572666   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 12:52:50.572782   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 12:52:50.592903   10230 provision.go:86] duration metric: configureAuth took 341.34064ms
	I0108 12:52:50.592919   10230 ubuntu.go:193] setting minikube options for container-runtime
	I0108 12:52:50.593116   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:52:50.593194   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:50.654868   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:50.655037   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:50.655047   10230 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 12:52:50.773475   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 12:52:50.773492   10230 ubuntu.go:71] root file system type: overlay
	I0108 12:52:50.773669   10230 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 12:52:50.773794   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:50.837942   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:50.838110   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:50.838158   10230 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 12:52:50.963696   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 12:52:50.963827   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.085008   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:51.085170   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:51.085184   10230 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 12:52:51.209125   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 12:52:51.209142   10230 machine.go:91] provisioned docker machine in 1.351654992s
	I0108 12:52:51.209153   10230 start.go:300] post-start starting for "multinode-124908" (driver="docker")
	I0108 12:52:51.209159   10230 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 12:52:51.209245   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 12:52:51.209315   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.266923   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:51.354711   10230 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 12:52:51.358249   10230 command_runner.go:130] > NAME="Ubuntu"
	I0108 12:52:51.358259   10230 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0108 12:52:51.358262   10230 command_runner.go:130] > ID=ubuntu
	I0108 12:52:51.358266   10230 command_runner.go:130] > ID_LIKE=debian
	I0108 12:52:51.358270   10230 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0108 12:52:51.358274   10230 command_runner.go:130] > VERSION_ID="20.04"
	I0108 12:52:51.358278   10230 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 12:52:51.358283   10230 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 12:52:51.358287   10230 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 12:52:51.358297   10230 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 12:52:51.358301   10230 command_runner.go:130] > VERSION_CODENAME=focal
	I0108 12:52:51.358313   10230 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0108 12:52:51.358361   10230 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 12:52:51.358373   10230 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 12:52:51.358380   10230 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 12:52:51.358384   10230 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 12:52:51.358397   10230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 12:52:51.358486   10230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 12:52:51.358651   10230 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 12:52:51.358658   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /etc/ssl/certs/40832.pem
	I0108 12:52:51.358838   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 12:52:51.366223   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:52:51.383155   10230 start.go:303] post-start completed in 173.994497ms
	I0108 12:52:51.383248   10230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 12:52:51.383323   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.439260   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:51.523964   10230 command_runner.go:130] > 12%
	I0108 12:52:51.524042   10230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 12:52:51.528589   10230 command_runner.go:130] > 49G
	I0108 12:52:51.528971   10230 fix.go:57] fixHost completed within 2.259674652s
	I0108 12:52:51.528983   10230 start.go:83] releasing machines lock for "multinode-124908", held for 2.259712111s
	I0108 12:52:51.529095   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908
	I0108 12:52:51.585931   10230 ssh_runner.go:195] Run: cat /version.json
	I0108 12:52:51.585962   10230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 12:52:51.586003   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.586036   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.647239   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:51.647408   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:51.796032   10230 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 12:52:51.796099   10230 command_runner.go:130] > {"iso_version": "v1.28.0-1668700269-15235", "kicbase_version": "v0.0.36-1668787669-15272", "minikube_version": "v1.28.0", "commit": "c883d3041e11322fb5c977f082b70bf31015848d"}
	I0108 12:52:51.796257   10230 ssh_runner.go:195] Run: systemctl --version
	I0108 12:52:51.801440   10230 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I0108 12:52:51.801458   10230 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0108 12:52:51.801581   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 12:52:51.809034   10230 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0108 12:52:51.821931   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:52:51.885088   10230 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 12:52:51.969615   10230 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 12:52:51.979408   10230 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0108 12:52:51.979518   10230 command_runner.go:130] > [Unit]
	I0108 12:52:51.979528   10230 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 12:52:51.979533   10230 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 12:52:51.979549   10230 command_runner.go:130] > BindsTo=containerd.service
	I0108 12:52:51.979554   10230 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0108 12:52:51.979558   10230 command_runner.go:130] > Wants=network-online.target
	I0108 12:52:51.979562   10230 command_runner.go:130] > Requires=docker.socket
	I0108 12:52:51.979566   10230 command_runner.go:130] > StartLimitBurst=3
	I0108 12:52:51.979569   10230 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 12:52:51.979572   10230 command_runner.go:130] > [Service]
	I0108 12:52:51.979576   10230 command_runner.go:130] > Type=notify
	I0108 12:52:51.979579   10230 command_runner.go:130] > Restart=on-failure
	I0108 12:52:51.979585   10230 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 12:52:51.979596   10230 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 12:52:51.979603   10230 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 12:52:51.979608   10230 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 12:52:51.979614   10230 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 12:52:51.979622   10230 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 12:52:51.979629   10230 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 12:52:51.979644   10230 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 12:52:51.979650   10230 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 12:52:51.979662   10230 command_runner.go:130] > ExecStart=
	I0108 12:52:51.979685   10230 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0108 12:52:51.979698   10230 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 12:52:51.979704   10230 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 12:52:51.979710   10230 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 12:52:51.979716   10230 command_runner.go:130] > LimitNOFILE=infinity
	I0108 12:52:51.979720   10230 command_runner.go:130] > LimitNPROC=infinity
	I0108 12:52:51.979724   10230 command_runner.go:130] > LimitCORE=infinity
	I0108 12:52:51.979734   10230 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 12:52:51.979742   10230 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 12:52:51.979746   10230 command_runner.go:130] > TasksMax=infinity
	I0108 12:52:51.979750   10230 command_runner.go:130] > TimeoutStartSec=0
	I0108 12:52:51.979758   10230 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 12:52:51.979764   10230 command_runner.go:130] > Delegate=yes
	I0108 12:52:51.979768   10230 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 12:52:51.979772   10230 command_runner.go:130] > KillMode=process
	I0108 12:52:51.979782   10230 command_runner.go:130] > [Install]
	I0108 12:52:51.979788   10230 command_runner.go:130] > WantedBy=multi-user.target
	I0108 12:52:51.980188   10230 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 12:52:51.980262   10230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 12:52:51.990190   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 12:52:52.002251   10230 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 12:52:52.002263   10230 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
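The two lines echoed back above are the entire crictl configuration: they point crictl (and any other CRI client) at the cri-dockerd socket instead of containerd or CRI-O. A minimal Go sketch of producing the same file is below; this is an illustrative helper, not minikube's own code, it assumes root access to /etc/crictl.yaml, and the 0644 mode is an assumption the log does not record.

// crictlconf.go - minimal sketch: write the same /etc/crictl.yaml the log shows,
// pointing CRI clients at the cri-dockerd socket.
package main

import (
	"log"
	"os"
)

func main() {
	conf := "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n" +
		"image-endpoint: unix:///var/run/cri-dockerd.sock\n"
	// 0644 is an assumed mode; the log does not record the file permissions.
	if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}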
	I0108 12:52:52.003132   10230 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 12:52:52.069656   10230 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 12:52:52.140848   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:52:52.205600   10230 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 12:52:52.445232   10230 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 12:52:52.518548   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:52:52.581781   10230 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 12:52:52.591462   10230 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 12:52:52.591569   10230 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 12:52:52.595434   10230 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 12:52:52.595444   10230 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 12:52:52.595451   10230 command_runner.go:130] > Device: 96h/150d	Inode: 117         Links: 1
	I0108 12:52:52.595459   10230 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0108 12:52:52.595466   10230 command_runner.go:130] > Access: 2023-01-08 20:52:51.893693381 +0000
	I0108 12:52:52.595478   10230 command_runner.go:130] > Modify: 2023-01-08 20:52:51.893693381 +0000
	I0108 12:52:52.595483   10230 command_runner.go:130] > Change: 2023-01-08 20:52:51.894693381 +0000
	I0108 12:52:52.595486   10230 command_runner.go:130] >  Birth: -
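start.go waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl; the stat output above is the successful check. A hedged Go sketch of that wait follows, with the path and timeout taken from the log and the 500 ms poll interval assumed.

// waitsock.go - sketch of the "Will wait 60s for socket path" step: poll os.Stat
// until the cri-dockerd socket exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // the path exists
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}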
	I0108 12:52:52.595505   10230 start.go:472] Will wait 60s for crictl version
	I0108 12:52:52.595557   10230 ssh_runner.go:195] Run: sudo crictl version
	I0108 12:52:52.623617   10230 command_runner.go:130] > Version:  0.1.0
	I0108 12:52:52.623629   10230 command_runner.go:130] > RuntimeName:  docker
	I0108 12:52:52.623633   10230 command_runner.go:130] > RuntimeVersion:  20.10.21
	I0108 12:52:52.623638   10230 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0108 12:52:52.625732   10230 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0108 12:52:52.625831   10230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:52:52.652816   10230 command_runner.go:130] > 20.10.21
	I0108 12:52:52.655127   10230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:52:52.682770   10230 command_runner.go:130] > 20.10.21
	I0108 12:52:52.728662   10230 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 12:52:52.728927   10230 cli_runner.go:164] Run: docker exec -t multinode-124908 dig +short host.docker.internal
	I0108 12:52:52.843621   10230 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 12:52:52.843752   10230 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 12:52:52.848044   10230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
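The bash one-liner above is an idempotent /etc/hosts edit: drop any stale host.minikube.internal line, append the fresh mapping, and copy the result back into place. The same logic in Go, as an illustrative sketch only (it writes /etc/hosts directly rather than going through a temp file, and needs root):

// hostsentry.go - sketch of the idempotent hosts-file update shown above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const ip, host = "192.168.65.2", "host.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// mirror `grep -v $'\thost.minikube.internal$'`
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}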
	I0108 12:52:52.857807   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:52.914657   10230 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:52:52.914751   10230 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 12:52:52.936515   10230 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I0108 12:52:52.936529   10230 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 12:52:52.936533   10230 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I0108 12:52:52.936541   10230 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I0108 12:52:52.936545   10230 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0108 12:52:52.936550   10230 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0108 12:52:52.936558   10230 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0108 12:52:52.936566   10230 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0108 12:52:52.936570   10230 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0108 12:52:52.936574   10230 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 12:52:52.936578   10230 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0108 12:52:52.938706   10230 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0108 12:52:52.938724   10230 docker.go:543] Images already preloaded, skipping extraction
	I0108 12:52:52.938830   10230 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 12:52:52.961775   10230 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I0108 12:52:52.961787   10230 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I0108 12:52:52.961792   10230 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 12:52:52.961796   10230 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I0108 12:52:52.961800   10230 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0108 12:52:52.961805   10230 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0108 12:52:52.961808   10230 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0108 12:52:52.961812   10230 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0108 12:52:52.961816   10230 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0108 12:52:52.961821   10230 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 12:52:52.961826   10230 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0108 12:52:52.963994   10230 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0108 12:52:52.964012   10230 cache_images.go:84] Images are preloaded, skipping loading
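cache_images.go decides whether the preload tarball needs to be extracted by comparing the images the daemon already reports against the expected set for this Kubernetes version. A rough Go equivalent of that check is sketched below; the expected list is copied (abridged) from the stdout block above, whereas the real code derives it from the requested Kubernetes version.

// preloadcheck.go - sketch of the "Images already preloaded" decision.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	// Abridged expected set; the log also lists kindnetd, pause, storage-provisioner and busybox.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/kube-controller-manager:v1.25.3",
		"registry.k8s.io/kube-scheduler:v1.25.3",
		"registry.k8s.io/kube-proxy:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
		}
	}
}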
	I0108 12:52:52.964110   10230 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 12:52:53.031049   10230 command_runner.go:130] > systemd
	I0108 12:52:53.033769   10230 cni.go:95] Creating CNI manager for ""
	I0108 12:52:53.033783   10230 cni.go:156] 3 nodes found, recommending kindnet
	I0108 12:52:53.033799   10230 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 12:52:53.033811   10230 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-124908 NodeName:multinode-124908 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 12:52:53.033919   10230 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-124908"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 12:52:53.033992   10230 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-124908 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 12:52:53.034066   10230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 12:52:53.041267   10230 command_runner.go:130] > kubeadm
	I0108 12:52:53.041276   10230 command_runner.go:130] > kubectl
	I0108 12:52:53.041280   10230 command_runner.go:130] > kubelet
	I0108 12:52:53.041935   10230 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 12:52:53.041998   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 12:52:53.049289   10230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I0108 12:52:53.061963   10230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 12:52:53.074600   10230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
	I0108 12:52:53.087393   10230 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 12:52:53.091268   10230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 12:52:53.101056   10230 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908 for IP: 192.168.58.2
	I0108 12:52:53.101174   10230 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 12:52:53.101232   10230 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 12:52:53.101320   10230 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key
	I0108 12:52:53.101402   10230 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.key.cee25041
	I0108 12:52:53.101467   10230 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.key
	I0108 12:52:53.101474   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 12:52:53.101504   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 12:52:53.101532   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 12:52:53.101555   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 12:52:53.101577   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 12:52:53.101599   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 12:52:53.101620   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 12:52:53.101654   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 12:52:53.101777   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 12:52:53.101816   10230 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 12:52:53.101828   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 12:52:53.101861   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 12:52:53.101897   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 12:52:53.101932   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 12:52:53.102010   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:52:53.102045   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.102069   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.102091   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem -> /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.102576   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 12:52:53.119843   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 12:52:53.136878   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 12:52:53.154404   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 12:52:53.171856   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 12:52:53.188984   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 12:52:53.205781   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 12:52:53.223289   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 12:52:53.240581   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 12:52:53.258415   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 12:52:53.275736   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 12:52:53.292619   10230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 12:52:53.305384   10230 ssh_runner.go:195] Run: openssl version
	I0108 12:52:53.310770   10230 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0108 12:52:53.310904   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 12:52:53.319002   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.322950   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.322968   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.323014   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.328167   10230 command_runner.go:130] > b5213941
	I0108 12:52:53.328524   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 12:52:53.336219   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 12:52:53.344122   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.348309   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.348373   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.348430   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.353404   10230 command_runner.go:130] > 51391683
	I0108 12:52:53.353799   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 12:52:53.361562   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 12:52:53.369417   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.373480   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.373578   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.373631   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.378641   10230 command_runner.go:130] > 3ec20f2e
	I0108 12:52:53.379054   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
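Each certificate copied into /usr/share/ca-certificates is made visible to OpenSSL-based clients by linking /etc/ssl/certs/<subject-hash>.0 to it; the `openssl x509 -hash -noout` runs above print those hashes (b5213941, 51391683, 3ec20f2e). A Go sketch of creating one such link, shelling out to openssl the same way the log does (root required, error handling trimmed; an illustration, not minikube's implementation):

// certlink.go - compute the OpenSSL subject hash of a PEM certificate and link
// /etc/ssl/certs/<hash>.0 to it, which is how OpenSSL-based programs find the CA.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// equivalent of `ln -fs`: drop any stale link, then symlink.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
}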
	I0108 12:52:53.386797   10230 kubeadm.go:396] StartCluster: {Name:multinode-124908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:52:53.386940   10230 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 12:52:53.409759   10230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 12:52:53.417076   10230 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0108 12:52:53.417086   10230 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0108 12:52:53.417091   10230 command_runner.go:130] > /var/lib/minikube/etcd:
	I0108 12:52:53.417094   10230 command_runner.go:130] > member
	I0108 12:52:53.417745   10230 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 12:52:53.417764   10230 kubeadm.go:627] restartCluster start
	I0108 12:52:53.417821   10230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 12:52:53.424840   10230 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:53.424924   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:53.505497   10230 kubeconfig.go:135] verify returned: extract IP: "multinode-124908" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:52:53.505586   10230 kubeconfig.go:146] "multinode-124908" context is missing from /Users/jenkins/minikube-integration/15565-2761/kubeconfig - will repair!
	I0108 12:52:53.505820   10230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:52:53.506248   10230 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:52:53.506470   10230 kapi.go:59] client config for multinode-124908: &rest.Config{Host:"https://127.0.0.1:51399", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 12:52:53.506824   10230 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 12:52:53.507014   10230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 12:52:53.514978   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:53.515042   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:53.524027   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:53.726136   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:53.726313   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:53.737412   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:53.924812   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:53.924981   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:53.935992   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.125324   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.125467   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.136333   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.326135   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.326315   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.337453   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.524413   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.524541   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.535379   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.725393   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.725598   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.737223   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.926217   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.926369   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.937621   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.126122   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.126306   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.137248   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.324353   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.324535   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.335906   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.524744   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.524921   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.535972   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.726145   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.726307   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.737509   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.926078   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.926195   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.937217   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.125041   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:56.125167   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:56.136182   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.324879   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:56.325061   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:56.336205   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.525915   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:56.526093   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:56.537187   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.537198   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:56.537254   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:56.545669   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
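The block of repeated "Checking apiserver status" entries above is a polling loop: run pgrep for the kube-apiserver process roughly every 200 ms and give up after a few seconds, which is what produces the "timed out waiting for the condition" verdict that follows. A hedged Go sketch of that probe (the overall timeout here is an assumption; exit status 1 from pgrep simply means no matching process yet):

// apiserverwait.go - sketch of the repeated pgrep probe for kube-apiserver.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return "", fmt.Errorf("apiserver process not found")
	}
	if err != nil {
		return "", err
	}
	return string(out), nil
}

func main() {
	deadline := time.Now().Add(3 * time.Second) // assumed; the log polls for a few seconds
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Print("apiserver pid: ", pid)
			return
		}
		time.Sleep(200 * time.Millisecond) // matches the ~200ms spacing of the log entries
	}
	fmt.Println("timed out waiting for the condition")
}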
	I0108 12:52:56.545683   10230 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 12:52:56.545691   10230 kubeadm.go:1114] stopping kube-system containers ...
	I0108 12:52:56.545773   10230 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 12:52:56.569982   10230 command_runner.go:130] > 102afbd16ebe
	I0108 12:52:56.569993   10230 command_runner.go:130] > 0fdc50ce7b7b
	I0108 12:52:56.569997   10230 command_runner.go:130] > 87704622b4c0
	I0108 12:52:56.570000   10230 command_runner.go:130] > bec02388b605
	I0108 12:52:56.570004   10230 command_runner.go:130] > 5f5efd278d83
	I0108 12:52:56.570013   10230 command_runner.go:130] > e8a051889a28
	I0108 12:52:56.570017   10230 command_runner.go:130] > e1fcc1a318f0
	I0108 12:52:56.570020   10230 command_runner.go:130] > c87fa6df09c3
	I0108 12:52:56.570024   10230 command_runner.go:130] > 015d397fcc74
	I0108 12:52:56.570035   10230 command_runner.go:130] > 284f82945805
	I0108 12:52:56.570039   10230 command_runner.go:130] > 3af41681452e
	I0108 12:52:56.570042   10230 command_runner.go:130] > f321d9700124
	I0108 12:52:56.570059   10230 command_runner.go:130] > 0f0a2ebaa1f8
	I0108 12:52:56.570068   10230 command_runner.go:130] > adaa05119a60
	I0108 12:52:56.570072   10230 command_runner.go:130] > 56a7fc40cef9
	I0108 12:52:56.570075   10230 command_runner.go:130] > a8533a49b21a
	I0108 12:52:56.572104   10230 docker.go:444] Stopping containers: [102afbd16ebe 0fdc50ce7b7b 87704622b4c0 bec02388b605 5f5efd278d83 e8a051889a28 e1fcc1a318f0 c87fa6df09c3 015d397fcc74 284f82945805 3af41681452e f321d9700124 0f0a2ebaa1f8 adaa05119a60 56a7fc40cef9 a8533a49b21a]
	I0108 12:52:56.572202   10230 ssh_runner.go:195] Run: docker stop 102afbd16ebe 0fdc50ce7b7b 87704622b4c0 bec02388b605 5f5efd278d83 e8a051889a28 e1fcc1a318f0 c87fa6df09c3 015d397fcc74 284f82945805 3af41681452e f321d9700124 0f0a2ebaa1f8 adaa05119a60 56a7fc40cef9 a8533a49b21a
	I0108 12:52:56.593957   10230 command_runner.go:130] > 102afbd16ebe
	I0108 12:52:56.594159   10230 command_runner.go:130] > 0fdc50ce7b7b
	I0108 12:52:56.594170   10230 command_runner.go:130] > 87704622b4c0
	I0108 12:52:56.594175   10230 command_runner.go:130] > bec02388b605
	I0108 12:52:56.594181   10230 command_runner.go:130] > 5f5efd278d83
	I0108 12:52:56.594185   10230 command_runner.go:130] > e8a051889a28
	I0108 12:52:56.594189   10230 command_runner.go:130] > e1fcc1a318f0
	I0108 12:52:56.594194   10230 command_runner.go:130] > c87fa6df09c3
	I0108 12:52:56.594199   10230 command_runner.go:130] > 015d397fcc74
	I0108 12:52:56.594204   10230 command_runner.go:130] > 284f82945805
	I0108 12:52:56.594208   10230 command_runner.go:130] > 3af41681452e
	I0108 12:52:56.594211   10230 command_runner.go:130] > f321d9700124
	I0108 12:52:56.594216   10230 command_runner.go:130] > 0f0a2ebaa1f8
	I0108 12:52:56.594219   10230 command_runner.go:130] > adaa05119a60
	I0108 12:52:56.594224   10230 command_runner.go:130] > 56a7fc40cef9
	I0108 12:52:56.594227   10230 command_runner.go:130] > a8533a49b21a
	I0108 12:52:56.596640   10230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 12:52:56.607237   10230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 12:52:56.614187   10230 command_runner.go:130] > -rw------- 1 root root 5639 Jan  8 20:49 /etc/kubernetes/admin.conf
	I0108 12:52:56.614198   10230 command_runner.go:130] > -rw------- 1 root root 5652 Jan  8 20:49 /etc/kubernetes/controller-manager.conf
	I0108 12:52:56.614203   10230 command_runner.go:130] > -rw------- 1 root root 2003 Jan  8 20:49 /etc/kubernetes/kubelet.conf
	I0108 12:52:56.614212   10230 command_runner.go:130] > -rw------- 1 root root 5604 Jan  8 20:49 /etc/kubernetes/scheduler.conf
	I0108 12:52:56.614896   10230 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 20:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 20:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Jan  8 20:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 20:49 /etc/kubernetes/scheduler.conf
	
	I0108 12:52:56.614961   10230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 12:52:56.621657   10230 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0108 12:52:56.622412   10230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 12:52:56.629082   10230 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0108 12:52:56.629737   10230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 12:52:56.637066   10230 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.637127   10230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 12:52:56.644154   10230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 12:52:56.651453   10230 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.651512   10230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 12:52:56.658752   10230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 12:52:56.666322   10230 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 12:52:56.666335   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:56.710529   10230 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 12:52:56.710545   10230 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 12:52:56.710753   10230 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 12:52:56.710952   10230 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 12:52:56.711290   10230 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0108 12:52:56.711531   10230 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0108 12:52:56.711656   10230 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0108 12:52:56.711833   10230 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0108 12:52:56.711854   10230 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0108 12:52:56.712420   10230 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 12:52:56.712434   10230 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 12:52:56.712443   10230 command_runner.go:130] > [certs] Using the existing "sa" key
	I0108 12:52:56.715507   10230 command_runner.go:130] ! W0108 20:52:56.705990    1166 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:52:56.715528   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:56.758836   10230 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 12:52:56.950261   10230 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0108 12:52:57.078955   10230 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0108 12:52:57.122673   10230 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 12:52:57.178930   10230 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 12:52:57.183005   10230 command_runner.go:130] ! W0108 20:52:56.754526    1176 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:52:57.183028   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:57.237241   10230 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 12:52:57.237832   10230 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 12:52:57.237842   10230 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 12:52:57.311096   10230 command_runner.go:130] ! W0108 20:52:57.223346    1199 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:52:57.311118   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:57.357742   10230 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 12:52:57.357754   10230 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 12:52:57.359538   10230 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 12:52:57.360408   10230 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 12:52:57.364205   10230 command_runner.go:130] ! W0108 20:52:57.352313    1233 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:52:57.364231   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:57.451663   10230 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 12:52:57.460408   10230 command_runner.go:130] ! W0108 20:52:57.446092    1248 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:52:57.460441   10230 api_server.go:51] waiting for apiserver process to appear ...
	I0108 12:52:57.460508   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:52:57.973186   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:52:58.471789   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:52:58.971735   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:52:58.982583   10230 command_runner.go:130] > 1732
	I0108 12:52:58.983279   10230 api_server.go:71] duration metric: took 1.522857133s to wait for apiserver process to appear ...
	I0108 12:52:58.983292   10230 api_server.go:87] waiting for apiserver healthz status ...
	I0108 12:52:58.983327   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:01.508071   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 12:53:01.508100   10230 api_server.go:102] status: https://127.0.0.1:51399/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 12:53:02.008480   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:02.015669   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 12:53:02.015688   10230 api_server.go:102] status: https://127.0.0.1:51399/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 12:53:02.508418   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:02.515192   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 12:53:02.515211   10230 api_server.go:102] status: https://127.0.0.1:51399/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 12:53:03.008292   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:03.014084   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 200:
	ok
	I0108 12:53:03.014144   10230 round_trippers.go:463] GET https://127.0.0.1:51399/version
	I0108 12:53:03.014150   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:03.014158   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:03.014168   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:03.021122   10230 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 12:53:03.021134   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:03.021141   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:03.021147   10230 round_trippers.go:580]     Content-Length: 263
	I0108 12:53:03.021153   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:03 GMT
	I0108 12:53:03.021159   10230 round_trippers.go:580]     Audit-Id: 684e78a1-475c-44d5-a7ff-e3c29595183b
	I0108 12:53:03.021164   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:03.021169   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:03.021173   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:03.021196   10230 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 12:53:03.021255   10230 api_server.go:140] control plane version: v1.25.3
	I0108 12:53:03.021264   10230 api_server.go:130] duration metric: took 4.038014039s to wait for apiserver health ...
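
The lines above show minikube probing https://127.0.0.1:51399/healthz, getting 500s while poststarthooks such as rbac/bootstrap-roles are still failing, and moving on once the endpoint returns 200. A minimal, hypothetical Go sketch of that probe loop (the URL, timeout, and the decision to skip TLS verification for the self-signed local cert are illustrative assumptions, not minikube's actual implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The local apiserver presents a self-signed certificate, so
		// verification is skipped here purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // back off between probes, roughly the cadence seen above
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:51399/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
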
	I0108 12:53:03.021271   10230 cni.go:95] Creating CNI manager for ""
	I0108 12:53:03.021276   10230 cni.go:156] 3 nodes found, recommending kindnet
	I0108 12:53:03.042576   10230 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 12:53:03.062708   10230 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 12:53:03.067603   10230 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 12:53:03.067617   10230 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0108 12:53:03.067622   10230 command_runner.go:130] > Device: 8eh/142d	Inode: 267161      Links: 1
	I0108 12:53:03.067627   10230 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 12:53:03.067637   10230 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0108 12:53:03.067644   10230 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0108 12:53:03.067650   10230 command_runner.go:130] > Change: 2023-01-08 20:27:37.453848555 +0000
	I0108 12:53:03.067653   10230 command_runner.go:130] >  Birth: -
	I0108 12:53:03.067704   10230 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 12:53:03.067711   10230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 12:53:03.082948   10230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 12:53:04.549960   10230 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 12:53:04.552431   10230 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 12:53:04.554814   10230 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 12:53:04.570180   10230 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 12:53:04.637826   10230 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.554871283s)
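
Above, the kindnet CNI manifest is applied by running the bundled kubectl over SSH inside the node. A rough local equivalent, sketched with os/exec; the binary, kubeconfig, and manifest paths are copied from the log purely as placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// applyManifest shells out to kubectl, mirroring the ssh_runner step above.
func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %w", err)
	}
	return nil
}

func main() {
	_ = applyManifest(
		"/var/lib/minikube/binaries/v1.25.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/var/tmp/minikube/cni.yaml",
	)
}
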
	I0108 12:53:04.637859   10230 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 12:53:04.637934   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:04.637942   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.637951   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.637958   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.642031   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:04.642049   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.642057   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.642064   10230 round_trippers.go:580]     Audit-Id: 31994e7b-307a-4a94-9abd-851474259fcb
	I0108 12:53:04.642070   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.642077   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.642084   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.642091   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.643605   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"696"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84480 chars]
	I0108 12:53:04.646687   10230 system_pods.go:59] 12 kube-system pods found
	I0108 12:53:04.646703   10230 system_pods.go:61] "coredns-565d847f94-f6gqj" [1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 12:53:04.646713   10230 system_pods.go:61] "etcd-multinode-124908" [9cf1a608-48d9-453e-bd35-263521e756e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 12:53:04.646718   10230 system_pods.go:61] "kindnet-4j92t" [2e0611f9-b324-4059-b858-ca1cc99bb8d9] Running
	I0108 12:53:04.646722   10230 system_pods.go:61] "kindnet-79h6s" [8899610c-9df6-488d-af2f-2848f1ce546b] Running
	I0108 12:53:04.646733   10230 system_pods.go:61] "kindnet-pj4l5" [82ac6efa-2268-472b-bd72-171778eabeb6] Running
	I0108 12:53:04.646738   10230 system_pods.go:61] "kube-apiserver-multinode-124908" [7e7e7fa5-c965-4737-83b1-afd48eb87547] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 12:53:04.646742   10230 system_pods.go:61] "kube-controller-manager-multinode-124908" [41ff8cf2-6b35-47c2-8f48-120e6adf98bb] Running
	I0108 12:53:04.646760   10230 system_pods.go:61] "kube-proxy-hq6ms" [3deaa832-bac0-47e3-bdef-482b094bf90f] Running
	I0108 12:53:04.646768   10230 system_pods.go:61] "kube-proxy-kzv6k" [05a4b261-aa83-4e23-83c6-0a50d659b5b7] Running
	I0108 12:53:04.646772   10230 system_pods.go:61] "kube-proxy-vx6bb" [7bff7041-dbf7-4143-9f70-52a12dd69f64] Running
	I0108 12:53:04.646779   10230 system_pods.go:61] "kube-scheduler-multinode-124908" [3dd0df78-6cad-4b47-a66f-74c412846b79] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 12:53:04.646787   10230 system_pods.go:61] "storage-provisioner" [6eda9f8e-814b-4a17-9ec8-89bd52973d7b] Running
	I0108 12:53:04.646792   10230 system_pods.go:74] duration metric: took 8.929012ms to wait for pod list to return data ...
	I0108 12:53:04.646797   10230 node_conditions.go:102] verifying NodePressure condition ...
	I0108 12:53:04.646846   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes
	I0108 12:53:04.646851   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.646857   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.646864   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.650155   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:04.650168   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.650174   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.650179   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.650184   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.650188   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.650193   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.650198   10230 round_trippers.go:580]     Audit-Id: e15e64ec-6068-491f-918b-1d2b6500b142
	I0108 12:53:04.650357   10230 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"696"},"items":[{"metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16257 chars]
	I0108 12:53:04.650959   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:04.650970   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:04.650980   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:04.650983   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:04.650987   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:04.650990   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:04.650993   10230 node_conditions.go:105] duration metric: took 4.191267ms to run NodePressure ...
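
The node_conditions lines above report each node's ephemeral-storage and CPU capacity from the NodeList response. A small client-go sketch that reads the same fields, assuming a kubeconfig at a placeholder path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; not minikube's actual lookup logic.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// e.g. "multinode-124908: ephemeral storage 61202244Ki, cpu 6"
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
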
	I0108 12:53:04.651011   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:53:04.845350   10230 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 12:53:04.878968   10230 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 12:53:04.882462   10230 command_runner.go:130] ! W0108 20:53:04.695681    2591 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:53:04.882486   10230 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 12:53:04.882545   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0108 12:53:04.882550   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.882561   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.882568   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.885663   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:04.885676   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.885685   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.885693   10230 round_trippers.go:580]     Audit-Id: c473700b-a017-49f4-83df-67c734502ca2
	I0108 12:53:04.885701   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.885711   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.885721   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.885729   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.886121   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"699"},"items":[{"metadata":{"name":"etcd-multinode-124908","namespace":"kube-system","uid":"9cf1a608-48d9-453e-bd35-263521e756e4","resourceVersion":"691","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.mirror":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.seen":"2023-01-08T20:49:35.642390520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30901 chars]
	I0108 12:53:04.886873   10230 kubeadm.go:778] kubelet initialised
	I0108 12:53:04.886884   10230 kubeadm.go:779] duration metric: took 4.38897ms waiting for restarted kubelet to initialise ...
	I0108 12:53:04.886890   10230 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 12:53:04.886946   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:04.886951   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.886957   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.886963   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.890935   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:04.890950   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.890958   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.890965   10230 round_trippers.go:580]     Audit-Id: 77394af1-9320-4fcb-a335-0542b5bf9807
	I0108 12:53:04.890972   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.890978   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.890985   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.890992   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.893087   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"699"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84895 chars]
	I0108 12:53:04.895018   10230 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-f6gqj" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:04.895054   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:04.895059   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.895077   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.895085   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.897564   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:04.897577   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.897584   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.897590   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.897596   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.897601   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.897606   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.897611   10230 round_trippers.go:580]     Audit-Id: 579029bf-6ccb-4889-aa93-21f8ce892022
	I0108 12:53:04.897668   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I0108 12:53:04.897931   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:04.897939   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.897945   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.897952   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.900086   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:04.900096   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.900103   10230 round_trippers.go:580]     Audit-Id: 86f0dc24-9c17-4908-b9e5-5fae44248ba2
	I0108 12:53:04.900108   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.900114   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.900119   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.900124   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.900129   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.900197   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:05.400935   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:05.400956   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:05.400969   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:05.400979   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:05.405074   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:05.405091   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:05.405099   10230 round_trippers.go:580]     Audit-Id: e3be4fc9-a8f6-44a1-82d5-41b4825949b0
	I0108 12:53:05.405113   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:05.405121   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:05.405128   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:05.405135   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:05.405141   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:05 GMT
	I0108 12:53:05.405209   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I0108 12:53:05.405534   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:05.405540   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:05.405548   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:05.405558   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:05.407498   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:05.407507   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:05.407513   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:05.407518   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:05.407523   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:05 GMT
	I0108 12:53:05.407528   10230 round_trippers.go:580]     Audit-Id: 83c4da48-d398-44b3-a3ca-a31158707127
	I0108 12:53:05.407533   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:05.407538   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:05.407588   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:05.902054   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:05.902079   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:05.902092   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:05.902102   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:05.906032   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:05.906047   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:05.906055   10230 round_trippers.go:580]     Audit-Id: 65c8bf4a-eb01-4cda-84da-76f08cf94ff0
	I0108 12:53:05.906061   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:05.906070   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:05.906079   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:05.906098   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:05.906111   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:05 GMT
	I0108 12:53:05.906368   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I0108 12:53:05.906675   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:05.906682   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:05.906688   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:05.906693   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:05.909029   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:05.909039   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:05.909045   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:05.909052   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:05 GMT
	I0108 12:53:05.909057   10230 round_trippers.go:580]     Audit-Id: c23fb512-753a-4896-8486-8854de091847
	I0108 12:53:05.909064   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:05.909069   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:05.909073   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:05.909122   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:06.402304   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:06.402329   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:06.402352   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:06.402388   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:06.406842   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:06.406861   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:06.406869   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:06.406877   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:06 GMT
	I0108 12:53:06.406885   10230 round_trippers.go:580]     Audit-Id: 339cd8f4-81cd-44e7-bf20-7216f87a83c8
	I0108 12:53:06.406891   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:06.406898   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:06.406906   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:06.407359   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I0108 12:53:06.407753   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:06.407760   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:06.407768   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:06.407773   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:06.410095   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:06.410104   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:06.410111   10230 round_trippers.go:580]     Audit-Id: 4281cb28-18c6-4781-a656-d1f21c01eaf8
	I0108 12:53:06.410116   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:06.410121   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:06.410126   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:06.410131   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:06.410138   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:06 GMT
	I0108 12:53:06.410193   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:06.900717   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:06.900744   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:06.900756   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:06.900766   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:06.904846   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:06.904858   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:06.904863   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:06 GMT
	I0108 12:53:06.904868   10230 round_trippers.go:580]     Audit-Id: 6116a822-e898-450e-be8e-b2c7c03aef4c
	I0108 12:53:06.904872   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:06.904877   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:06.904882   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:06.904887   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:06.904943   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:06.905233   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:06.905240   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:06.905246   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:06.905251   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:06.907608   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:06.907618   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:06.907624   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:06.907628   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:06.907634   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:06.907639   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:06.907644   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:06 GMT
	I0108 12:53:06.907648   10230 round_trippers.go:580]     Audit-Id: 1223433b-d1ec-41e2-9007-94cb5d53b27d
	I0108 12:53:06.907707   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:06.907900   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:07.401670   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:07.401699   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:07.401712   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:07.401722   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:07.406266   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:07.406281   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:07.406289   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:07.406295   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:07.406304   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:07.406310   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:07.406317   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:07 GMT
	I0108 12:53:07.406325   10230 round_trippers.go:580]     Audit-Id: 8a440171-1b9d-4511-af18-45eade58537f
	I0108 12:53:07.406397   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:07.406685   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:07.406694   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:07.406700   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:07.406705   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:07.408851   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:07.408860   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:07.408866   10230 round_trippers.go:580]     Audit-Id: deee1fdb-76a1-4950-9ca1-7e5ea74d29fd
	I0108 12:53:07.408874   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:07.408879   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:07.408884   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:07.408891   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:07.408896   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:07 GMT
	I0108 12:53:07.408941   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:07.900978   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:07.901004   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:07.901041   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:07.901053   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:07.905443   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:07.905460   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:07.905468   10230 round_trippers.go:580]     Audit-Id: 43aba184-11ec-4d7b-982a-4e20db65c4d3
	I0108 12:53:07.905481   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:07.905488   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:07.905495   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:07.905505   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:07.905512   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:07 GMT
	I0108 12:53:07.905599   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:07.905888   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:07.905894   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:07.905900   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:07.905906   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:07.908057   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:07.908068   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:07.908080   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:07.908086   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:07 GMT
	I0108 12:53:07.908091   10230 round_trippers.go:580]     Audit-Id: 3c940629-e21a-4abe-a17d-0a67c0770595
	I0108 12:53:07.908096   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:07.908101   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:07.908107   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:07.908298   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:08.400643   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:08.400665   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:08.400679   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:08.400690   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:08.404941   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:08.404954   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:08.404960   10230 round_trippers.go:580]     Audit-Id: 24dbf2c6-f457-4781-9340-3566268bb28b
	I0108 12:53:08.404965   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:08.404970   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:08.404974   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:08.404980   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:08.404984   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:08 GMT
	I0108 12:53:08.405046   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:08.405344   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:08.405350   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:08.405357   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:08.405362   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:08.407621   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:08.407631   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:08.407637   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:08 GMT
	I0108 12:53:08.407642   10230 round_trippers.go:580]     Audit-Id: 081ba777-2efa-4033-a574-2cadbc586a4f
	I0108 12:53:08.407647   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:08.407652   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:08.407657   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:08.407662   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:08.407715   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:08.902591   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:08.902617   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:08.902630   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:08.902639   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:08.906951   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:08.906973   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:08.906982   10230 round_trippers.go:580]     Audit-Id: e01444bd-a440-458c-b67e-93df79a1beba
	I0108 12:53:08.906989   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:08.906995   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:08.907009   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:08.907016   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:08.907022   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:08 GMT
	I0108 12:53:08.907096   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:08.907476   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:08.907483   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:08.907489   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:08.907498   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:08.909344   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:08.909354   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:08.909361   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:08 GMT
	I0108 12:53:08.909366   10230 round_trippers.go:580]     Audit-Id: a59a0b1a-a7b5-477c-a1b5-e9c09253faaf
	I0108 12:53:08.909372   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:08.909377   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:08.909381   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:08.909386   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:08.909439   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:08.909628   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:09.402626   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:09.402646   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:09.402659   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:09.402669   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:09.406895   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:09.406912   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:09.406920   10230 round_trippers.go:580]     Audit-Id: fa404f6f-1d6f-452f-ad28-7dfda2c3794f
	I0108 12:53:09.406930   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:09.406940   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:09.406953   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:09.406960   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:09.406967   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:09 GMT
	I0108 12:53:09.407044   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:09.407336   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:09.407342   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:09.407348   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:09.407353   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:09.409424   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:09.409435   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:09.409440   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:09.409445   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:09.409450   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:09.409455   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:09 GMT
	I0108 12:53:09.409459   10230 round_trippers.go:580]     Audit-Id: 149cf3f3-ebea-4510-957a-6df1827dcd92
	I0108 12:53:09.409465   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:09.409529   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:09.901516   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:09.901542   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:09.901555   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:09.901565   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:09.905927   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:09.905941   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:09.905947   10230 round_trippers.go:580]     Audit-Id: 08e1f89e-bf09-4917-b4be-2405407e5b92
	I0108 12:53:09.905952   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:09.905956   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:09.905964   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:09.905970   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:09.905975   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:09 GMT
	I0108 12:53:09.906031   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:09.906333   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:09.906339   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:09.906346   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:09.906351   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:09.908413   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:09.908422   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:09.908429   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:09.908436   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:09.908442   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:09.908446   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:09.908451   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:09 GMT
	I0108 12:53:09.908457   10230 round_trippers.go:580]     Audit-Id: 1eb2bcbc-47e0-44f2-91b6-e5637f4fb736
	I0108 12:53:09.908520   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:10.400619   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:10.400643   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:10.400656   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:10.400667   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:10.405116   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:10.405130   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:10.405136   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:10.405141   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:10.405146   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:10.405151   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:10 GMT
	I0108 12:53:10.405156   10230 round_trippers.go:580]     Audit-Id: 63139a02-5739-4763-96a1-bd788d59767d
	I0108 12:53:10.405160   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:10.405213   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:10.405504   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:10.405511   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:10.405517   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:10.405523   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:10.407546   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:10.407556   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:10.407563   10230 round_trippers.go:580]     Audit-Id: d1a0c967-be63-4200-b491-469feaafe4fc
	I0108 12:53:10.407568   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:10.407573   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:10.407578   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:10.407585   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:10.407591   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:10 GMT
	I0108 12:53:10.407641   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:10.902569   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:10.902595   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:10.902607   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:10.902617   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:10.907231   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:10.907244   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:10.907249   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:10.907254   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:10.907259   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:10 GMT
	I0108 12:53:10.907264   10230 round_trippers.go:580]     Audit-Id: 0d6beea5-4cb5-441c-a93e-bb1efecf7a72
	I0108 12:53:10.907269   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:10.907274   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:10.907328   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:10.907626   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:10.907633   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:10.907639   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:10.907644   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:10.910064   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:10.910075   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:10.910080   10230 round_trippers.go:580]     Audit-Id: 4bdb1e23-2f77-4a23-b89b-0e2150a86135
	I0108 12:53:10.910085   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:10.910090   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:10.910095   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:10.910100   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:10.910105   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:10 GMT
	I0108 12:53:10.910165   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:10.910353   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:11.401597   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:11.401619   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:11.401633   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:11.401643   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:11.405721   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:11.405737   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:11.405745   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:11.405758   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:11.405766   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:11.405773   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:11 GMT
	I0108 12:53:11.405780   10230 round_trippers.go:580]     Audit-Id: 2f8b6627-8567-43d3-91e1-53a91ad6cb75
	I0108 12:53:11.405786   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:11.405860   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:11.406195   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:11.406201   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:11.406207   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:11.406212   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:11.408410   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:11.408422   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:11.408428   10230 round_trippers.go:580]     Audit-Id: 454b8085-ce61-4818-89cc-69dc6d74824d
	I0108 12:53:11.408433   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:11.408439   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:11.408447   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:11.408454   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:11.408459   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:11 GMT
	I0108 12:53:11.408517   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:11.900520   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:11.900549   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:11.900564   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:11.900608   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:11.904836   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:11.904848   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:11.904854   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:11.904859   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:11.904863   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:11 GMT
	I0108 12:53:11.904868   10230 round_trippers.go:580]     Audit-Id: 0fc78cf7-45dd-4e4e-8d56-6159d7c62129
	I0108 12:53:11.904873   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:11.904878   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:11.904934   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:11.905227   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:11.905233   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:11.905239   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:11.905244   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:11.907630   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:11.907640   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:11.907646   10230 round_trippers.go:580]     Audit-Id: e905667e-5088-4b9d-9ec5-d9264d15e70a
	I0108 12:53:11.907651   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:11.907656   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:11.907661   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:11.907666   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:11.907671   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:11 GMT
	I0108 12:53:11.907721   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:12.401548   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:12.401575   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:12.401588   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:12.401598   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:12.406018   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:12.406031   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:12.406036   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:12.406041   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:12.406045   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:12.406050   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:12.406055   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:12 GMT
	I0108 12:53:12.406060   10230 round_trippers.go:580]     Audit-Id: cfd9fefd-0bd8-474f-b1d7-cf82c87d0e38
	I0108 12:53:12.406120   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:12.406404   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:12.406410   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:12.406416   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:12.406421   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:12.408206   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:12.408216   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:12.408222   10230 round_trippers.go:580]     Audit-Id: 1611999b-3e7e-4c6a-9af2-fde2c6533874
	I0108 12:53:12.408227   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:12.408232   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:12.408237   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:12.408242   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:12.408248   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:12 GMT
	I0108 12:53:12.408671   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:12.901420   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:12.901444   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:12.901456   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:12.901466   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:12.905575   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:12.905588   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:12.905593   10230 round_trippers.go:580]     Audit-Id: 9ec13795-c602-4cda-a089-66513d4fe34b
	I0108 12:53:12.905605   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:12.905610   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:12.905615   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:12.905620   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:12.905625   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:12 GMT
	I0108 12:53:12.905683   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:12.905972   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:12.905979   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:12.905985   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:12.905991   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:12.907998   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:12.908007   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:12.908013   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:12 GMT
	I0108 12:53:12.908018   10230 round_trippers.go:580]     Audit-Id: 25939f30-fa79-4b2b-b819-67c364f92dce
	I0108 12:53:12.908023   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:12.908027   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:12.908032   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:12.908037   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:12.908084   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:13.401642   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:13.401663   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:13.401676   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:13.401687   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:13.406260   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:13.406275   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:13.406281   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:13.406285   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:13.406290   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:13.406295   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:13 GMT
	I0108 12:53:13.406301   10230 round_trippers.go:580]     Audit-Id: e39a8098-b9cf-4481-b585-f0ce7307d0e8
	I0108 12:53:13.406307   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:13.406363   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:13.406647   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:13.406654   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:13.406660   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:13.406666   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:13.408997   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:13.409007   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:13.409012   10230 round_trippers.go:580]     Audit-Id: e8d6252b-53be-4cfd-b6a3-5ea2ceff75e5
	I0108 12:53:13.409018   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:13.409023   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:13.409028   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:13.409033   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:13.409038   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:13 GMT
	I0108 12:53:13.409079   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:13.409264   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:13.902095   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:13.902121   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:13.902134   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:13.902143   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:13.906228   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:13.906245   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:13.906253   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:13.906266   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:13.906273   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:13.906280   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:13 GMT
	I0108 12:53:13.906288   10230 round_trippers.go:580]     Audit-Id: a5efbfde-e311-4de7-b8f6-2b846d8c7db9
	I0108 12:53:13.906296   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:13.906376   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:13.906673   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:13.906681   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:13.906690   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:13.906697   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:13.909030   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:13.909043   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:13.909050   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:13.909055   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:13.909060   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:13.909065   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:13 GMT
	I0108 12:53:13.909070   10230 round_trippers.go:580]     Audit-Id: c4252b24-d6c4-4dd9-90d2-3573f3a69d4c
	I0108 12:53:13.909074   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:13.909208   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:14.400576   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:14.400591   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:14.400598   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:14.400603   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:14.404714   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:14.404728   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:14.404734   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:14 GMT
	I0108 12:53:14.404739   10230 round_trippers.go:580]     Audit-Id: 31fba4f1-e4f3-43e7-9059-894e1dbef4e2
	I0108 12:53:14.404745   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:14.404750   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:14.404755   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:14.404760   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:14.404812   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:14.405107   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:14.405114   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:14.405120   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:14.405125   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:14.407713   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:14.407724   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:14.407730   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:14 GMT
	I0108 12:53:14.407736   10230 round_trippers.go:580]     Audit-Id: d3a50b73-649e-47c3-872b-5cb58ef985ca
	I0108 12:53:14.407742   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:14.407746   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:14.407752   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:14.407757   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:14.407803   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:14.901611   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:14.901638   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:14.901651   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:14.901662   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:14.906384   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:14.906396   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:14.906402   10230 round_trippers.go:580]     Audit-Id: e4ca0c28-4aad-478d-bbfc-803d8ec54ad6
	I0108 12:53:14.906407   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:14.906412   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:14.906417   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:14.906422   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:14.906427   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:14 GMT
	I0108 12:53:14.906488   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:14.906781   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:14.906788   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:14.906794   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:14.906799   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:14.909111   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:14.909120   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:14.909125   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:14.909130   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:14 GMT
	I0108 12:53:14.909135   10230 round_trippers.go:580]     Audit-Id: 34441076-77ce-49c2-a7b5-98828c7da87c
	I0108 12:53:14.909140   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:14.909145   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:14.909150   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:14.909204   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:15.400545   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:15.400566   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:15.400580   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:15.400590   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:15.404808   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:15.404825   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:15.404833   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:15 GMT
	I0108 12:53:15.404841   10230 round_trippers.go:580]     Audit-Id: 6abc98d0-297e-420b-99db-0b701ea3216e
	I0108 12:53:15.404847   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:15.404854   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:15.404861   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:15.404867   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:15.404936   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:15.405256   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:15.405263   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:15.405269   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:15.405281   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:15.407356   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:15.407370   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:15.407380   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:15.407394   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:15.407402   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:15 GMT
	I0108 12:53:15.407407   10230 round_trippers.go:580]     Audit-Id: 34f612f5-85e1-409f-90c1-7de9fe87a42b
	I0108 12:53:15.407412   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:15.407420   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:15.407605   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:15.901013   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:15.901036   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:15.901049   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:15.901058   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:15.905050   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:15.905066   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:15.905074   10230 round_trippers.go:580]     Audit-Id: 91030e44-92b8-4aa9-ab1a-cf0650784ee0
	I0108 12:53:15.905081   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:15.905088   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:15.905097   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:15.905106   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:15.905114   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:15 GMT
	I0108 12:53:15.905201   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:15.905486   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:15.905492   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:15.905498   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:15.905517   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:15.907705   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:15.907714   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:15.907720   10230 round_trippers.go:580]     Audit-Id: 63dcdbd4-079b-44bf-b05d-4dd5cf1a927c
	I0108 12:53:15.907725   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:15.907731   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:15.907736   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:15.907741   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:15.907746   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:15 GMT
	I0108 12:53:15.907821   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:15.908007   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:16.401785   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:16.401810   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:16.401823   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:16.401834   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:16.406307   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:16.406323   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:16.406332   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:16.406338   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:16.406346   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:16 GMT
	I0108 12:53:16.406369   10230 round_trippers.go:580]     Audit-Id: 9e9fd6f6-7ba7-4dd5-ada7-e6d2c1c283a7
	I0108 12:53:16.406374   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:16.406379   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:16.406432   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:16.406720   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:16.406727   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:16.406733   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:16.406739   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:16.408737   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:16.408746   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:16.408753   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:16.408759   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:16.408766   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:16.408771   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:16.408776   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:16 GMT
	I0108 12:53:16.408780   10230 round_trippers.go:580]     Audit-Id: 4fd2ae85-585f-4575-b8f5-2ca56f54ea61
	I0108 12:53:16.408844   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:16.902488   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:16.902511   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:16.902523   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:16.902534   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:16.906420   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:16.906435   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:16.906443   10230 round_trippers.go:580]     Audit-Id: 15ca3d61-78df-40cb-b805-425d65d48bd2
	I0108 12:53:16.906450   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:16.906458   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:16.906467   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:16.906476   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:16.906482   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:16 GMT
	I0108 12:53:16.906891   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:16.907185   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:16.907196   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:16.907205   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:16.907213   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:16.908795   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:16.908806   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:16.908814   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:16.908820   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:16.908825   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:16.908829   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:16.908843   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:16 GMT
	I0108 12:53:16.908851   10230 round_trippers.go:580]     Audit-Id: 80c2470e-71d5-43c6-a0ab-09b0ffef8725
	I0108 12:53:16.908992   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:17.402468   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:17.402487   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:17.402499   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:17.402510   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:17.406665   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:17.406682   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:17.406690   10230 round_trippers.go:580]     Audit-Id: 2cb32221-fa9f-4e97-897e-55105c794b4a
	I0108 12:53:17.406699   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:17.406719   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:17.406726   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:17.406735   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:17.406743   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:17 GMT
	I0108 12:53:17.406825   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:17.407156   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:17.407163   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:17.407169   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:17.407174   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:17.409602   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:17.409613   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:17.409619   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:17.409627   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:17 GMT
	I0108 12:53:17.409633   10230 round_trippers.go:580]     Audit-Id: 2d38e923-0650-441e-a2a6-63f10808e1aa
	I0108 12:53:17.409645   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:17.409651   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:17.409656   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:17.409800   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:17.902474   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:17.902502   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:17.902515   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:17.902526   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:17.906759   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:17.906775   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:17.906783   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:17.906790   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:17.906797   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:17 GMT
	I0108 12:53:17.906803   10230 round_trippers.go:580]     Audit-Id: c3268ac1-ae94-4431-a034-f6fdd1206609
	I0108 12:53:17.906814   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:17.906821   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:17.906889   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:17.907199   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:17.907206   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:17.907212   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:17.907217   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:17.909451   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:17.909461   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:17.909467   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:17.909472   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:17 GMT
	I0108 12:53:17.909477   10230 round_trippers.go:580]     Audit-Id: 0873bcf6-604a-4c02-8aac-ef60aee2ca2e
	I0108 12:53:17.909482   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:17.909487   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:17.909492   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:17.909538   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:17.909718   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:18.400413   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:18.400437   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:18.400450   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:18.400461   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:18.404469   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:18.404479   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:18.404484   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:18.404489   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:18.404494   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:18.404499   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:18 GMT
	I0108 12:53:18.404505   10230 round_trippers.go:580]     Audit-Id: 7b014a1a-48a2-4559-beca-dabd8cf065d5
	I0108 12:53:18.404509   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:18.404555   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:18.404833   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:18.404840   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:18.404846   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:18.404851   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:18.407012   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:18.407020   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:18.407026   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:18.407031   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:18.407036   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:18 GMT
	I0108 12:53:18.407041   10230 round_trippers.go:580]     Audit-Id: 4ca59c1b-9703-428d-8102-77abcc326ad3
	I0108 12:53:18.407047   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:18.407051   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:18.407177   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:18.901752   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:18.901778   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:18.901801   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:18.901812   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:18.906416   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:18.906432   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:18.906440   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:18.906446   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:18.906452   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:18.906460   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:18 GMT
	I0108 12:53:18.906466   10230 round_trippers.go:580]     Audit-Id: 4857704d-7df2-4e13-8c75-3d95e18fc015
	I0108 12:53:18.906473   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:18.906547   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:18.906844   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:18.906850   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:18.906856   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:18.906870   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:18.909028   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:18.909037   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:18.909044   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:18 GMT
	I0108 12:53:18.909049   10230 round_trippers.go:580]     Audit-Id: 7ee56434-087c-45a3-ac07-e6af86325d0d
	I0108 12:53:18.909056   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:18.909061   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:18.909065   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:18.909070   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:18.909121   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:19.402461   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:19.402484   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:19.402497   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:19.402508   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:19.406907   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:19.406920   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:19.406925   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:19.406930   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:19.406935   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:19 GMT
	I0108 12:53:19.406940   10230 round_trippers.go:580]     Audit-Id: dfcf0c57-1209-4908-a86d-aecf2d920be0
	I0108 12:53:19.406945   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:19.406949   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:19.406998   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:19.407286   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:19.407294   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:19.407301   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:19.407306   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:19.409219   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:19.409229   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:19.409236   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:19.409242   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:19.409247   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:19 GMT
	I0108 12:53:19.409253   10230 round_trippers.go:580]     Audit-Id: 6ac1d95d-7850-4b69-a979-ef2961c21f6a
	I0108 12:53:19.409259   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:19.409266   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:19.409489   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:19.902438   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:19.902461   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:19.902474   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:19.902484   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:19.906837   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:19.906851   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:19.906858   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:19.906862   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:19.906868   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:19.906873   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:19.906878   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:19 GMT
	I0108 12:53:19.906883   10230 round_trippers.go:580]     Audit-Id: 58c006a8-c967-48ac-8a69-ffb05fe531ee
	I0108 12:53:19.906937   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:19.907227   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:19.907234   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:19.907240   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:19.907247   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:19.909358   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:19.909369   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:19.909374   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:19.909380   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:19.909396   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:19.909404   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:19.909410   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:19 GMT
	I0108 12:53:19.909416   10230 round_trippers.go:580]     Audit-Id: c60990e9-6029-46ff-b5d7-da42f950c0ed
	I0108 12:53:19.909472   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:20.401792   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:20.401814   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:20.401827   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:20.401837   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:20.406255   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:20.406267   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:20.406272   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:20.406279   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:20 GMT
	I0108 12:53:20.406286   10230 round_trippers.go:580]     Audit-Id: 7910fbf0-f910-4c71-8de1-eecec1bf70bb
	I0108 12:53:20.406292   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:20.406296   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:20.406301   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:20.406357   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:20.406646   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:20.406652   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:20.406659   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:20.406667   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:20.408621   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:20.408631   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:20.408636   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:20.408642   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:20.408647   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:20.408652   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:20 GMT
	I0108 12:53:20.408657   10230 round_trippers.go:580]     Audit-Id: 1b8378f3-da8f-4413-8140-a971035373ca
	I0108 12:53:20.408662   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:20.408715   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:20.408892   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:20.901666   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:20.901692   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:20.901729   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:20.901740   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:20.905569   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:20.905581   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:20.905587   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:20.905593   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:20.905597   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:20.905602   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:20 GMT
	I0108 12:53:20.905607   10230 round_trippers.go:580]     Audit-Id: 7537b795-bae0-4d32-8762-8cd9df9e46df
	I0108 12:53:20.905612   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:20.905667   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:20.905961   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:20.905969   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:20.905977   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:20.905987   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:20.908220   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:20.908229   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:20.908234   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:20.908239   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:20.908243   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:20 GMT
	I0108 12:53:20.908248   10230 round_trippers.go:580]     Audit-Id: fbac2927-eff4-470f-9ece-c6beb8fa62c3
	I0108 12:53:20.908253   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:20.908258   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:20.908309   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:21.402358   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:21.402382   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:21.402395   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:21.402405   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:21.406134   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:21.406144   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:21.406150   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:21.406154   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:21.406159   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:21 GMT
	I0108 12:53:21.406164   10230 round_trippers.go:580]     Audit-Id: 4d3e4f91-c5da-4cc0-8cfa-ad203450641b
	I0108 12:53:21.406169   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:21.406173   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:21.406477   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:21.406793   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:21.406800   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:21.406806   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:21.406811   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:21.409063   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:21.409074   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:21.409079   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:21.409084   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:21.409089   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:21.409094   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:21.409099   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:21 GMT
	I0108 12:53:21.409104   10230 round_trippers.go:580]     Audit-Id: fa4d06fd-8316-4765-a047-bb3d8c1daffc
	I0108 12:53:21.409153   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:21.900510   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:21.900536   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:21.900548   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:21.900558   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:21.904983   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:21.904997   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:21.905004   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:21.905009   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:21.905015   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:21.905023   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:21 GMT
	I0108 12:53:21.905028   10230 round_trippers.go:580]     Audit-Id: b1da2cbf-e8dc-42de-8532-21fc93d74fb7
	I0108 12:53:21.905033   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:21.905090   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:21.905377   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:21.905383   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:21.905389   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:21.905395   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:21.907375   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:21.907384   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:21.907391   10230 round_trippers.go:580]     Audit-Id: f553963c-3885-484f-9553-045cb801bbfc
	I0108 12:53:21.907396   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:21.907402   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:21.907406   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:21.907411   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:21.907416   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:21 GMT
	I0108 12:53:21.907475   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:22.401280   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:22.401313   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:22.401326   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:22.401336   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:22.405879   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:22.405891   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:22.405897   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:22.405909   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:22.405915   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:22 GMT
	I0108 12:53:22.405920   10230 round_trippers.go:580]     Audit-Id: be29bbfd-000f-4676-8f27-bb553493de52
	I0108 12:53:22.405925   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:22.405930   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:22.405981   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:22.406270   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:22.406277   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:22.406283   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:22.406288   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:22.408044   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:22.408056   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:22.408064   10230 round_trippers.go:580]     Audit-Id: dd10bc20-5a4e-44cc-ae0f-c28640e35646
	I0108 12:53:22.408071   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:22.408079   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:22.408085   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:22.408093   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:22.408107   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:22 GMT
	I0108 12:53:22.408485   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:22.900361   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:22.900387   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:22.900400   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:22.900409   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:22.904191   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:22.904204   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:22.904211   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:22.904216   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:22.904220   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:22.904227   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:22 GMT
	I0108 12:53:22.904231   10230 round_trippers.go:580]     Audit-Id: 48409349-ed45-4594-8ea3-c1a47b7f711b
	I0108 12:53:22.904236   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:22.904601   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:22.904893   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:22.904900   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:22.904906   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:22.904911   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:22.907026   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:22.907035   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:22.907041   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:22.907045   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:22.907050   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:22 GMT
	I0108 12:53:22.907055   10230 round_trippers.go:580]     Audit-Id: 71982930-1743-4668-a28c-8d65a6de135e
	I0108 12:53:22.907060   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:22.907065   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:22.907111   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:22.907301   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:23.401854   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:23.401875   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:23.401888   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:23.401899   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:23.406364   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:23.406380   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:23.406388   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:23.406397   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:23.406405   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:23 GMT
	I0108 12:53:23.406411   10230 round_trippers.go:580]     Audit-Id: 5fc4e16d-5909-457c-b6e3-01c1f3d22272
	I0108 12:53:23.406418   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:23.406425   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:23.406492   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:23.406795   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:23.406802   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:23.406808   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:23.406813   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:23.409009   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:23.409019   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:23.409026   10230 round_trippers.go:580]     Audit-Id: 1789df30-a3a3-4045-90ed-b2c02fe9a947
	I0108 12:53:23.409032   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:23.409037   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:23.409046   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:23.409052   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:23.409057   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:23 GMT
	I0108 12:53:23.409118   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:23.900995   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:23.901021   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:23.901033   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:23.901043   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:23.905417   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:23.905429   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:23.905434   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:23.905440   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:23 GMT
	I0108 12:53:23.905444   10230 round_trippers.go:580]     Audit-Id: ef352e6e-433f-4e5e-a0b2-ae7f0f1512cd
	I0108 12:53:23.905450   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:23.905454   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:23.905459   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:23.905528   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:23.905833   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:23.905839   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:23.905848   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:23.905865   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:23.907989   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:23.907999   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:23.908005   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:23 GMT
	I0108 12:53:23.908010   10230 round_trippers.go:580]     Audit-Id: 3bcb21f7-efbe-48a2-9024-cbf8d08a3aa8
	I0108 12:53:23.908015   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:23.908020   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:23.908025   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:23.908030   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:23.908087   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:24.401859   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:24.401880   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:24.401893   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:24.401902   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:24.406348   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:24.406363   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:24.406371   10230 round_trippers.go:580]     Audit-Id: 4deb5dc1-3554-4742-870b-49b5ac2e115b
	I0108 12:53:24.406378   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:24.406385   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:24.406397   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:24.406405   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:24.406411   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:24 GMT
	I0108 12:53:24.406484   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:24.406841   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:24.406847   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:24.406853   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:24.406858   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:24.408896   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:24.408906   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:24.408911   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:24.408916   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:24.408922   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:24.408926   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:24.408931   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:24 GMT
	I0108 12:53:24.408936   10230 round_trippers.go:580]     Audit-Id: 21a408db-46d7-4fce-8203-d1b49e59b012
	I0108 12:53:24.408983   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:24.902363   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:24.902390   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:24.902402   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:24.902412   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:24.907155   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:24.907169   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:24.907175   10230 round_trippers.go:580]     Audit-Id: 301ecd7a-32cf-4607-aaa8-2e20214ca984
	I0108 12:53:24.907180   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:24.907189   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:24.907195   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:24.907200   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:24.907204   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:24 GMT
	I0108 12:53:24.907261   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:24.907555   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:24.907561   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:24.907567   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:24.907572   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:24.909793   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:24.909802   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:24.909807   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:24.909812   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:24.909817   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:24.909826   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:24.909832   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:24 GMT
	I0108 12:53:24.909836   10230 round_trippers.go:580]     Audit-Id: 3b17ef80-ede1-417e-8892-cfa6dda0a4b4
	I0108 12:53:24.909885   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:24.910079   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
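(For a one-off check of the same condition outside the test harness, kubectl can wait on the Ready condition directly; the pod name below is the one from this run, while the timeout value is only an example.)

kubectl wait --for=condition=Ready pod/coredns-565d847f94-f6gqj -n kube-system --timeout=6m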
	I0108 12:53:25.400330   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:25.400350   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:25.400362   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:25.400372   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:25.404087   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:25.404115   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:25.404121   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:25.404126   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:25.404132   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:25.404136   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:25.404143   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:25 GMT
	I0108 12:53:25.404152   10230 round_trippers.go:580]     Audit-Id: 1523ac6d-60a0-45d7-b65d-1520214807b7
	I0108 12:53:25.404213   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:25.404501   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:25.404508   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:25.404514   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:25.404519   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:25.406290   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:25.406299   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:25.406305   10230 round_trippers.go:580]     Audit-Id: ed0fba3a-11e2-4bb5-b9e5-971494a0f31c
	I0108 12:53:25.406310   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:25.406317   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:25.406322   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:25.406331   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:25.406338   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:25 GMT
	I0108 12:53:25.406507   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:25.902301   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:25.902330   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:25.902344   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:25.902355   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:25.906309   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:25.906333   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:25.906342   10230 round_trippers.go:580]     Audit-Id: 58482e80-7110-4619-acd3-26f80a47e283
	I0108 12:53:25.906349   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:25.906356   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:25.906362   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:25.906370   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:25.906383   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:25 GMT
	I0108 12:53:25.906511   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:25.906846   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:25.906854   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:25.906860   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:25.906865   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:25.909156   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:25.909166   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:25.909171   10230 round_trippers.go:580]     Audit-Id: 9c9072c5-be39-4190-ad2c-f688615a513c
	I0108 12:53:25.909177   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:25.909182   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:25.909187   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:25.909194   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:25.909200   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:25 GMT
	I0108 12:53:25.909249   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:26.401256   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:26.401279   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:26.401292   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:26.401302   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:26.405668   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:26.405681   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:26.405689   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:26.405696   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:26.405701   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:26 GMT
	I0108 12:53:26.405711   10230 round_trippers.go:580]     Audit-Id: 70df19b6-4cc6-4ad6-9eaa-45888e4ec5f5
	I0108 12:53:26.405717   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:26.405722   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:26.405793   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:26.406104   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:26.406111   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:26.406117   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:26.406122   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:26.408210   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:26.408220   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:26.408226   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:26.408231   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:26.408236   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:26 GMT
	I0108 12:53:26.408241   10230 round_trippers.go:580]     Audit-Id: 187e4b70-80cd-44b0-acf2-36cfbfb4e117
	I0108 12:53:26.408246   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:26.408251   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:26.408302   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:26.900317   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:26.900343   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:26.900358   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:26.900368   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:26.904258   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:26.904268   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:26.904274   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:26.904279   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:26.904284   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:26.904292   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:26 GMT
	I0108 12:53:26.904296   10230 round_trippers.go:580]     Audit-Id: de495b00-b6fe-456f-a6d1-04fc79eb728d
	I0108 12:53:26.904301   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:26.904421   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:26.904719   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:26.904726   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:26.904732   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:26.904737   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:26.906673   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:26.906683   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:26.906689   10230 round_trippers.go:580]     Audit-Id: e932223a-1b1c-443d-900f-822fd50c9bb8
	I0108 12:53:26.906694   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:26.906699   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:26.906704   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:26.906708   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:26.906713   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:26 GMT
	I0108 12:53:26.906949   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:27.400506   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:27.400535   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:27.400549   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:27.400560   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:27.404653   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:27.404668   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:27.404689   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:27.404695   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:27.404699   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:27.404703   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:27 GMT
	I0108 12:53:27.404708   10230 round_trippers.go:580]     Audit-Id: 1f95bf5f-60a9-4fac-a58c-e329fa0675e5
	I0108 12:53:27.404714   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:27.404770   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:27.405062   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:27.405068   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:27.405074   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:27.405079   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:27.407145   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:27.407155   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:27.407161   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:27 GMT
	I0108 12:53:27.407167   10230 round_trippers.go:580]     Audit-Id: 5b403726-ecd5-45c7-a056-437a135b5f72
	I0108 12:53:27.407173   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:27.407178   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:27.407182   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:27.407187   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:27.407247   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:27.407430   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:27.901462   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:27.901487   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:27.901499   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:27.901509   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:27.905797   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:27.905811   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:27.905816   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:27.905826   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:27 GMT
	I0108 12:53:27.905832   10230 round_trippers.go:580]     Audit-Id: cfda5a56-a569-4e52-b8d8-855af286e543
	I0108 12:53:27.905836   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:27.905841   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:27.905847   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:27.905913   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:27.906205   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:27.906212   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:27.906218   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:27.906223   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:27.908189   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:27.908201   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:27.908206   10230 round_trippers.go:580]     Audit-Id: f197df51-ed54-4116-8edf-9115819dea5a
	I0108 12:53:27.908212   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:27.908216   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:27.908222   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:27.908226   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:27.908232   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:27 GMT
	I0108 12:53:27.908294   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:28.402322   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:28.402348   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:28.402360   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:28.402370   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:28.406692   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:28.406708   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:28.406716   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:28.406723   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:28.406729   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:28.406736   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:28 GMT
	I0108 12:53:28.406742   10230 round_trippers.go:580]     Audit-Id: 441cee74-03c6-4231-ba3f-c34f0b4d49db
	I0108 12:53:28.406749   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:28.406823   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:28.407185   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:28.407192   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:28.407200   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:28.407206   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:28.409542   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:28.409552   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:28.409557   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:28.409569   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:28.409575   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:28.409579   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:28 GMT
	I0108 12:53:28.409585   10230 round_trippers.go:580]     Audit-Id: 96932676-d079-4569-a3a7-863cd219b237
	I0108 12:53:28.409590   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:28.409642   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:28.900543   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:28.900572   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:28.900586   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:28.900623   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:28.905189   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:28.905201   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:28.905207   10230 round_trippers.go:580]     Audit-Id: 70370d8c-7e06-4264-80b6-68806ba6c2b0
	I0108 12:53:28.905212   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:28.905217   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:28.905222   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:28.905226   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:28.905231   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:28 GMT
	I0108 12:53:28.905291   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:28.905621   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:28.905628   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:28.905634   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:28.905640   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:28.907795   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:28.907804   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:28.907810   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:28 GMT
	I0108 12:53:28.907815   10230 round_trippers.go:580]     Audit-Id: fcf5e9c1-9041-4ca6-95af-4905f9712653
	I0108 12:53:28.907820   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:28.907825   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:28.907830   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:28.907841   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:28.907927   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:29.402308   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:29.402334   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:29.402347   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:29.402357   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:29.406691   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:29.406704   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:29.406710   10230 round_trippers.go:580]     Audit-Id: 2a77bc9d-8f5a-4c76-bf0f-7e974b383b6a
	I0108 12:53:29.406715   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:29.406731   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:29.406739   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:29.406748   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:29.406754   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:29 GMT
	I0108 12:53:29.406817   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:29.407106   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:29.407112   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:29.407118   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:29.407124   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:29.409277   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:29.409287   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:29.409293   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:29.409299   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:29.409304   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:29.409308   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:29.409314   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:29 GMT
	I0108 12:53:29.409318   10230 round_trippers.go:580]     Audit-Id: 207f303c-463e-452e-bc79-90a410d7c248
	I0108 12:53:29.409379   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:29.409564   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:29.900777   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:29.900807   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:29.900821   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:29.900833   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:29.905021   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:29.905039   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:29.905047   10230 round_trippers.go:580]     Audit-Id: 9b1a72e5-c7ce-459b-8d14-41db8f1057d0
	I0108 12:53:29.905060   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:29.905068   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:29.905074   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:29.905081   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:29.905088   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:29 GMT
	I0108 12:53:29.905180   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:29.905492   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:29.905498   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:29.905506   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:29.905512   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:29.907736   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:29.907745   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:29.907750   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:29.907756   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:29.907762   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:29 GMT
	I0108 12:53:29.907767   10230 round_trippers.go:580]     Audit-Id: 6912b489-e2b5-4b52-b2d3-1f3924665358
	I0108 12:53:29.907772   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:29.907777   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:29.907839   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:30.400604   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:30.400630   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:30.400642   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:30.400652   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:30.404591   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:30.404605   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:30.404611   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:30.404616   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:30.404621   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:30 GMT
	I0108 12:53:30.404625   10230 round_trippers.go:580]     Audit-Id: bb29b27f-54fc-4d4c-9ead-4f99b5bc2320
	I0108 12:53:30.404631   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:30.404636   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:30.404724   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:30.405048   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:30.405055   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:30.405061   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:30.405066   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:30.407081   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:30.407090   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:30.407097   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:30.407103   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:30.407108   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:30.407113   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:30 GMT
	I0108 12:53:30.407118   10230 round_trippers.go:580]     Audit-Id: 62d876d2-4387-44b6-b623-4e6a6e00fdcc
	I0108 12:53:30.407123   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:30.407180   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:30.900565   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:30.900592   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:30.900605   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:30.900615   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:30.905097   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:30.905110   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:30.905116   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:30.905120   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:30.905124   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:30.905129   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:30.905133   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:30 GMT
	I0108 12:53:30.905137   10230 round_trippers.go:580]     Audit-Id: cc113f36-ef16-4e59-8e55-c8935a20396f
	I0108 12:53:30.905205   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:30.905497   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:30.905503   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:30.905509   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:30.905514   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:30.907583   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:30.907592   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:30.907598   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:30.907603   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:30 GMT
	I0108 12:53:30.907608   10230 round_trippers.go:580]     Audit-Id: 267c3cdc-e948-4d55-a666-a37c8819207f
	I0108 12:53:30.907612   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:30.907620   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:30.907625   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:30.907697   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:31.402368   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:31.402390   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:31.402403   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:31.402414   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:31.406695   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:31.406712   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:31.406720   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:31 GMT
	I0108 12:53:31.406766   10230 round_trippers.go:580]     Audit-Id: 9905734c-bfb8-4520-b7c5-81f2c15194d8
	I0108 12:53:31.406775   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:31.406781   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:31.406802   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:31.406807   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:31.406872   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:31.407192   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:31.407198   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:31.407204   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:31.407210   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:31.409118   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:31.409128   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:31.409134   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:31.409139   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:31.409144   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:31 GMT
	I0108 12:53:31.409149   10230 round_trippers.go:580]     Audit-Id: d51ed3c5-d662-4e68-a68f-7bfd2295fb35
	I0108 12:53:31.409154   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:31.409159   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:31.409227   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:31.900785   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:31.900811   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:31.900824   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:31.900885   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:31.904694   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:31.904711   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:31.904719   10230 round_trippers.go:580]     Audit-Id: b19c7977-5603-4ef0-b8cf-e91fd1609d10
	I0108 12:53:31.904732   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:31.904740   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:31.904746   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:31.904752   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:31.904759   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:31 GMT
	I0108 12:53:31.904977   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:31.905290   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:31.905297   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:31.905303   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:31.905309   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:31.907376   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:31.907385   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:31.907390   10230 round_trippers.go:580]     Audit-Id: 1f420ce4-148f-496d-98fe-a7d5389adfac
	I0108 12:53:31.907395   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:31.907400   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:31.907405   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:31.907410   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:31.907414   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:31 GMT
	I0108 12:53:31.907756   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:31.908204   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:32.400252   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:32.400278   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:32.400290   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:32.400300   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:32.404077   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:32.404092   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:32.404098   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:32.404104   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:32.404109   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:32.404114   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:32.404119   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:32 GMT
	I0108 12:53:32.404124   10230 round_trippers.go:580]     Audit-Id: 27a826e0-dc84-4ceb-ad5c-0f04906eb3a5
	I0108 12:53:32.404183   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:32.404478   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:32.404486   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:32.404492   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:32.404497   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:32.406895   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:32.406905   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:32.406911   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:32.406916   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:32.406922   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:32.406927   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:32 GMT
	I0108 12:53:32.406932   10230 round_trippers.go:580]     Audit-Id: 0e056654-e8e7-4502-ad34-3cf88df1b44d
	I0108 12:53:32.406936   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:32.406984   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:32.900689   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:32.900714   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:32.900727   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:32.900737   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:32.904928   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:32.904951   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:32.904962   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:32 GMT
	I0108 12:53:32.904971   10230 round_trippers.go:580]     Audit-Id: 01bd709a-c4b2-44e3-93f0-47c85e8686d4
	I0108 12:53:32.904980   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:32.904989   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:32.904996   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:32.905003   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:32.905138   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:32.905462   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:32.905469   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:32.905475   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:32.905481   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:32.907362   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:32.907396   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:32.907406   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:32.907411   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:32.907419   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:32.907423   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:32.907428   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:32 GMT
	I0108 12:53:32.907434   10230 round_trippers.go:580]     Audit-Id: d467aab8-463b-4fc9-b7a4-c47423207dc4
	I0108 12:53:32.907495   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:33.402248   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:33.402270   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:33.402293   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:33.402304   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:33.406679   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:33.406691   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:33.406697   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:33.406702   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:33.406708   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:33.406713   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:33.406718   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:33 GMT
	I0108 12:53:33.406722   10230 round_trippers.go:580]     Audit-Id: 68f86963-55bb-4209-9ca5-720a5aed892e
	I0108 12:53:33.406795   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:33.407087   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:33.407093   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:33.407100   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:33.407105   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:33.408965   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:33.408973   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:33.408979   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:33.408984   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:33.408989   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:33 GMT
	I0108 12:53:33.408994   10230 round_trippers.go:580]     Audit-Id: 4575536c-5f84-4939-a93d-7cd5ce1e9fcc
	I0108 12:53:33.408999   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:33.409004   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:33.409055   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:33.901797   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:33.901825   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:33.901840   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:33.901850   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:33.906154   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:33.906168   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:33.906174   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:33.906179   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:33.906184   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:33.906189   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:33.906193   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:33 GMT
	I0108 12:53:33.906198   10230 round_trippers.go:580]     Audit-Id: 7a1f660e-e7a1-4406-ab6c-94eed0b34f9d
	I0108 12:53:33.906273   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:33.906566   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:33.906573   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:33.906579   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:33.906584   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:33.908556   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:33.908567   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:33.908572   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:33.908577   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:33.908583   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:33 GMT
	I0108 12:53:33.908588   10230 round_trippers.go:580]     Audit-Id: 74e17cee-66f4-4794-aa84-1adf06c31bfc
	I0108 12:53:33.908593   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:33.908599   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:33.908696   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:33.908882   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:34.400409   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:34.400432   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:34.400445   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:34.400455   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:34.404611   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:34.404624   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:34.404635   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:34.404640   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:34.404645   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:34 GMT
	I0108 12:53:34.404650   10230 round_trippers.go:580]     Audit-Id: 40421b1a-f445-45a1-8698-2a8fc0f33285
	I0108 12:53:34.404655   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:34.404660   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:34.404706   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:34.404990   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:34.404997   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:34.405003   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:34.405015   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:34.407210   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:34.407220   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:34.407225   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:34.407231   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:34.407236   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:34.407241   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:34.407246   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:34 GMT
	I0108 12:53:34.407250   10230 round_trippers.go:580]     Audit-Id: 4e60496b-3ec9-4a1f-a1af-6d7448914c00
	I0108 12:53:34.407302   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:34.902053   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:34.902081   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:34.902094   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:34.902106   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:34.906369   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:34.906385   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:34.906392   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:34.906399   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:34.906405   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:34 GMT
	I0108 12:53:34.906413   10230 round_trippers.go:580]     Audit-Id: 7cd289e8-d4f0-43a4-b3a1-58b98bcfaf92
	I0108 12:53:34.906419   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:34.906425   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:34.906488   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:34.906826   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:34.906833   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:34.906839   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:34.906844   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:34.909205   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:34.909213   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:34.909218   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:34.909223   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:34.909228   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:34.909233   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:34.909238   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:34 GMT
	I0108 12:53:34.909243   10230 round_trippers.go:580]     Audit-Id: b2c5e9d0-7d84-4442-8ad0-a827e1f5e4ae
	I0108 12:53:34.909294   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:35.402214   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:35.402242   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:35.402255   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:35.402265   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:35.407950   10230 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 12:53:35.407963   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:35.407968   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:35.407973   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:35.407978   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:35.407983   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:35 GMT
	I0108 12:53:35.407989   10230 round_trippers.go:580]     Audit-Id: 15c175c0-55c6-41d7-b60a-9ea168af00a0
	I0108 12:53:35.407994   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:35.408059   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:35.408341   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:35.408347   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:35.408353   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:35.408358   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:35.410777   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:35.410787   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:35.410793   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:35.410797   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:35.410803   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:35 GMT
	I0108 12:53:35.410807   10230 round_trippers.go:580]     Audit-Id: 99e0a46c-5796-4210-b4cf-f72b72d0c76e
	I0108 12:53:35.410812   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:35.410823   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:35.410866   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:35.900972   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:35.900999   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:35.901012   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:35.901021   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:35.905540   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:35.905552   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:35.905558   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:35.905563   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:35.905571   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:35.905591   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:35 GMT
	I0108 12:53:35.905603   10230 round_trippers.go:580]     Audit-Id: daa3756c-116c-4fc6-93de-32b42589546a
	I0108 12:53:35.905612   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:35.905683   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:35.905966   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:35.905973   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:35.905979   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:35.905985   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:35.908178   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:35.908187   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:35.908193   10230 round_trippers.go:580]     Audit-Id: b64625c4-ec5a-4826-86e5-363d015d8b56
	I0108 12:53:35.908199   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:35.908203   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:35.908208   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:35.908213   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:35.908218   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:35 GMT
	I0108 12:53:35.908260   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:36.400154   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:36.400177   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:36.400191   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:36.400201   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:36.404156   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:36.404168   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:36.404174   10230 round_trippers.go:580]     Audit-Id: a69e0494-8932-4573-bd1b-d256c2a4d5bb
	I0108 12:53:36.404184   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:36.404190   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:36.404194   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:36.404199   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:36.404204   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:36 GMT
	I0108 12:53:36.404244   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:36.404538   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:36.404545   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:36.404551   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:36.404556   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:36.406522   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:36.406531   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:36.406537   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:36.406542   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:36.406547   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:36.406552   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:36 GMT
	I0108 12:53:36.406557   10230 round_trippers.go:580]     Audit-Id: 1e159bb2-7c02-49ea-b773-abf7fd71b954
	I0108 12:53:36.406561   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:36.406602   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:36.406775   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:36.900794   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:36.900819   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:36.900832   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:36.900875   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:36.905007   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:36.905020   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:36.905029   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:36 GMT
	I0108 12:53:36.905036   10230 round_trippers.go:580]     Audit-Id: ae061a2a-8fd7-4196-8c82-0ff5ce057262
	I0108 12:53:36.905041   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:36.905046   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:36.905050   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:36.905055   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:36.905178   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:36.905479   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:36.905487   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:36.905494   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:36.905499   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:36.907643   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:36.907655   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:36.907662   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:36.907668   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:36.907674   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:36 GMT
	I0108 12:53:36.907679   10230 round_trippers.go:580]     Audit-Id: 6c18c6b2-f6fb-4699-bd17-c3ffe88c9c1d
	I0108 12:53:36.907684   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:36.907689   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:36.907798   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:37.400376   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:37.400389   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:37.400396   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:37.400401   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:37.403179   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:37.403189   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:37.403194   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:37 GMT
	I0108 12:53:37.403199   10230 round_trippers.go:580]     Audit-Id: a82c387e-7a7c-4d8e-9714-e93d04052de5
	I0108 12:53:37.403204   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:37.403211   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:37.403216   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:37.403222   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:37.403371   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:37.403649   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:37.403655   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:37.403661   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:37.403667   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:37.405779   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:37.405788   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:37.405793   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:37.405798   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:37.405803   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:37.405808   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:37 GMT
	I0108 12:53:37.405813   10230 round_trippers.go:580]     Audit-Id: 6ba26cba-1a4c-414d-bddb-a5016d70656d
	I0108 12:53:37.405818   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:37.405861   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:37.902182   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:37.902209   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:37.902221   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:37.902231   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:37.906328   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:37.906344   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:37.906351   10230 round_trippers.go:580]     Audit-Id: 96ceaeb9-0464-486e-96b7-1967a2de9ffb
	I0108 12:53:37.906359   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:37.906366   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:37.906372   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:37.906379   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:37.906386   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:37 GMT
	I0108 12:53:37.906443   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:37.906832   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:37.906848   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:37.906861   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:37.906870   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:37.909075   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:37.909084   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:37.909090   10230 round_trippers.go:580]     Audit-Id: e526a7e8-9132-470f-9768-2c1b3326ef4a
	I0108 12:53:37.909097   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:37.909103   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:37.909108   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:37.909112   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:37.909118   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:37 GMT
	I0108 12:53:37.909158   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:38.400935   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:38.400958   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:38.400971   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:38.400981   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:38.405556   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:38.405570   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:38.405575   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:38 GMT
	I0108 12:53:38.405580   10230 round_trippers.go:580]     Audit-Id: 2cee152e-027d-48d2-9731-73c114809b15
	I0108 12:53:38.405585   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:38.405590   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:38.405594   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:38.405599   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:38.405643   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:38.405926   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:38.405933   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:38.405939   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:38.405945   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:38.408013   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:38.408021   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:38.408027   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:38.408032   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:38.408037   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:38.408041   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:38 GMT
	I0108 12:53:38.408047   10230 round_trippers.go:580]     Audit-Id: 8f0b62a2-7f58-47b5-86eb-f5b7c9e9da00
	I0108 12:53:38.408051   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:38.408278   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:38.408461   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:38.901498   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:38.901523   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:38.901536   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:38.901546   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:38.906016   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:38.906034   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:38.906041   10230 round_trippers.go:580]     Audit-Id: 3b0a0f4f-1cd8-4aaa-8b45-20414057729a
	I0108 12:53:38.906046   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:38.906050   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:38.906056   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:38.906060   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:38.906066   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:38 GMT
	I0108 12:53:38.906116   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:38.906411   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:38.906419   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:38.906425   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:38.906434   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:38.908573   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:38.908582   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:38.908587   10230 round_trippers.go:580]     Audit-Id: 6e078755-f6e0-469c-8ab8-9f439f630b2a
	I0108 12:53:38.908591   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:38.908596   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:38.908601   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:38.908606   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:38.908612   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:38 GMT
	I0108 12:53:38.908654   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.401613   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:39.401635   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.401648   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.401658   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.405811   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:39.405826   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.405834   10230 round_trippers.go:580]     Audit-Id: b0bef3d4-de30-4c19-9c8c-c6f17140861b
	I0108 12:53:39.405841   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.405848   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.405854   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.405860   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.405868   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.405922   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:39.406245   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.406252   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.406259   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.406265   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.408311   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.408320   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.408326   10230 round_trippers.go:580]     Audit-Id: 31f9f27e-fc5b-4822-8df7-efac85e9a5a2
	I0108 12:53:39.408331   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.408336   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.408341   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.408346   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.408351   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.408394   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.900415   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:39.900441   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.900453   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.900463   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.905021   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:39.905033   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.905039   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.905044   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.905049   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.905053   10230 round_trippers.go:580]     Audit-Id: ca7d8f3f-7e7d-4334-b808-bc4758934825
	I0108 12:53:39.905058   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.905076   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.905129   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6552 chars]
	I0108 12:53:39.905407   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.905414   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.905420   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.905425   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.907745   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.907755   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.907761   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.907766   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.907771   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.907777   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.907782   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.907787   10230 round_trippers.go:580]     Audit-Id: 139d5f1c-ca50-4027-b022-fb51f0c34374
	I0108 12:53:39.907829   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.908007   10230 pod_ready.go:92] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.908017   10230 pod_ready.go:81] duration metric: took 35.013428327s waiting for pod "coredns-565d847f94-f6gqj" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.908025   10230 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.908052   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/etcd-multinode-124908
	I0108 12:53:39.908057   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.908063   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.908069   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.909982   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.909991   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.909996   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.910000   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.910006   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.910011   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.910016   10230 round_trippers.go:580]     Audit-Id: 929ff0ce-2e47-41ea-af1d-0d8d3e9f78d3
	I0108 12:53:39.910021   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.910198   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124908","namespace":"kube-system","uid":"9cf1a608-48d9-453e-bd35-263521e756e4","resourceVersion":"742","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.mirror":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.seen":"2023-01-08T20:49:35.642390520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I0108 12:53:39.910412   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.910419   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.910425   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.910430   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.912602   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.912612   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.912617   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.912623   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.912628   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.912633   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.912638   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.912643   10230 round_trippers.go:580]     Audit-Id: 8f3b5930-6eb7-44e4-85dd-3d7b9e59997d
	I0108 12:53:39.912686   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.912857   10230 pod_ready.go:92] pod "etcd-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.912864   10230 pod_ready.go:81] duration metric: took 4.833522ms waiting for pod "etcd-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.912874   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.912898   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124908
	I0108 12:53:39.912903   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.912909   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.912914   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.914700   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.914708   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.914714   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.914720   10230 round_trippers.go:580]     Audit-Id: 5940fd87-5eb4-48d5-b97c-159d73e1ddd1
	I0108 12:53:39.914725   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.914730   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.914735   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.914740   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.914783   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124908","namespace":"kube-system","uid":"7e7e7fa5-c965-4737-83b1-afd48eb87547","resourceVersion":"779","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7e3bdd07923da057548f2016d7097374","kubernetes.io/config.mirror":"7e3bdd07923da057548f2016d7097374","kubernetes.io/config.seen":"2023-01-08T20:49:35.642400230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I0108 12:53:39.915025   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.915031   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.915037   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.915042   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.917208   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.917217   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.917223   10230 round_trippers.go:580]     Audit-Id: e3eeb255-3631-4b52-93ec-42619046ee39
	I0108 12:53:39.917229   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.917234   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.917240   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.917245   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.917250   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.917283   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.917452   10230 pod_ready.go:92] pod "kube-apiserver-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.917458   10230 pod_ready.go:81] duration metric: took 4.579502ms waiting for pod "kube-apiserver-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.917464   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.917489   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124908
	I0108 12:53:39.917493   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.917499   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.917505   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.919541   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.919550   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.919556   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.919561   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.919566   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.919571   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.919576   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.919581   10230 round_trippers.go:580]     Audit-Id: 1ca36a59-2bf7-49fe-b32d-68df43c21004
	I0108 12:53:39.919647   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124908","namespace":"kube-system","uid":"41ff8cf2-6b35-47c2-8f48-120e6adf98bb","resourceVersion":"763","creationTimestamp":"2023-01-08T20:49:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5faaebc8229ee8bf257c9d1c46ead3c","kubernetes.io/config.mirror":"d5faaebc8229ee8bf257c9d1c46ead3c","kubernetes.io/config.seen":"2023-01-08T20:49:35.642401085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8002 chars]
	I0108 12:53:39.919900   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.919907   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.919912   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.919918   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.921869   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.921877   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.921883   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.921888   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.921893   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.921898   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.921903   10230 round_trippers.go:580]     Audit-Id: c206783d-4aee-4044-8b7a-72748141441a
	I0108 12:53:39.921908   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.921950   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.922117   10230 pod_ready.go:92] pod "kube-controller-manager-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.922123   10230 pod_ready.go:81] duration metric: took 4.654475ms waiting for pod "kube-controller-manager-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.922130   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hq6ms" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.922153   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-hq6ms
	I0108 12:53:39.922157   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.922163   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.922169   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.924039   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.924050   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.924055   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.924060   10230 round_trippers.go:580]     Audit-Id: 5eb736fb-5c2f-4264-9215-1f9c6cf5eafc
	I0108 12:53:39.924066   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.924072   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.924078   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.924083   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.924200   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hq6ms","generateName":"kube-proxy-","namespace":"kube-system","uid":"3deaa832-bac0-47e3-bdef-482b094bf90f","resourceVersion":"669","creationTimestamp":"2023-01-08T20:51:09Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I0108 12:53:39.924424   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m03
	I0108 12:53:39.924430   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.924436   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.924442   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.926155   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.926163   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.926168   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.926174   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.926179   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.926184   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.926189   10230 round_trippers.go:580]     Audit-Id: 5a8bffc7-37c1-43be-833c-a4a9701f0551
	I0108 12:53:39.926193   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.926228   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908-m03","uid":"00d677bd-1b22-4d63-8258-31e7e0d73f15","resourceVersion":"756","creationTimestamp":"2023-01-08T20:51:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4321 chars]
	I0108 12:53:39.926376   10230 pod_ready.go:92] pod "kube-proxy-hq6ms" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.926382   10230 pod_ready.go:81] duration metric: took 4.247857ms waiting for pod "kube-proxy-hq6ms" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.926387   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kzv6k" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.100709   10230 request.go:614] Waited for 174.201687ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-kzv6k
	I0108 12:53:40.100756   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-kzv6k
	I0108 12:53:40.100765   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.100778   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.100793   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.104855   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:40.104873   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.104884   10230 round_trippers.go:580]     Audit-Id: 70876165-774a-4af2-9101-51aa2bd6cb4a
	I0108 12:53:40.104893   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.104899   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.104918   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.104928   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.104934   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.105152   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kzv6k","generateName":"kube-proxy-","namespace":"kube-system","uid":"05a4b261-aa83-4e23-83c6-0a50d659b5b7","resourceVersion":"705","creationTimestamp":"2023-01-08T20:49:47Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
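The "Waited for ... due to client-side throttling, not priority and fairness" lines above and below come from client-go's client-side rate limiter. The rest.Config dumped later in this log (kapi.go:59) leaves QPS and Burst at 0, so client-go falls back to its defaults (5 requests/sec, burst 10), which is what introduces these ~200 ms waits between polls. A minimal sketch of how a caller could raise those limits, with illustrative values that are not what minikube actually configures:

    package client

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset with explicit client-side rate limits.
    // When QPS/Burst are left at zero (as in the rest.Config in this log),
    // client-go applies its defaults (5 QPS, burst 10), producing the
    // throttling waits seen above.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50   // illustrative value, not minikube's setting
        cfg.Burst = 100 // illustrative value, not minikube's setting
        return kubernetes.NewForConfig(cfg)
    }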
	I0108 12:53:40.300810   10230 request.go:614] Waited for 195.309578ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:40.300857   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:40.300865   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.300877   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.300891   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.305027   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:40.305043   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.305051   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.305057   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.305064   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.305071   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.305077   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.305084   10230 round_trippers.go:580]     Audit-Id: f205480e-b823-4e1a-9974-49498e281dc4
	I0108 12:53:40.305146   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:40.305393   10230 pod_ready.go:92] pod "kube-proxy-kzv6k" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:40.305401   10230 pod_ready.go:81] duration metric: took 379.012876ms waiting for pod "kube-proxy-kzv6k" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.305407   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vx6bb" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.500423   10230 request.go:614] Waited for 194.974989ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-vx6bb
	I0108 12:53:40.500474   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-vx6bb
	I0108 12:53:40.500484   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.500527   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.500541   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.504463   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:40.504475   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.504480   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.504486   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.504491   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.504496   10230 round_trippers.go:580]     Audit-Id: e4bdaf27-28e7-4a88-8a67-367a28d94b6f
	I0108 12:53:40.504501   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.504505   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.504560   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vx6bb","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bff7041-dbf7-4143-9f70-52a12dd69f64","resourceVersion":"467","creationTimestamp":"2023-01-08T20:50:25Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I0108 12:53:40.700980   10230 request.go:614] Waited for 196.055266ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:40.701033   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:40.701041   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.701055   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.701069   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.705141   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:40.705156   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.705164   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.705171   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.705177   10230 round_trippers.go:580]     Audit-Id: e07f4dad-e953-4aea-901b-09a4dcaadc47
	I0108 12:53:40.705184   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.705191   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.705198   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.705259   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908-m02","uid":"06778a45-7a2c-401b-918a-d4864150c87c","resourceVersion":"587","creationTimestamp":"2023-01-08T20:50:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4504 chars]
	I0108 12:53:40.705476   10230 pod_ready.go:92] pod "kube-proxy-vx6bb" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:40.705483   10230 pod_ready.go:81] duration metric: took 400.076367ms waiting for pod "kube-proxy-vx6bb" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.705490   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.900487   10230 request.go:614] Waited for 194.956523ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124908
	I0108 12:53:40.900538   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124908
	I0108 12:53:40.900546   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.900590   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.900611   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.904855   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:40.904882   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.904890   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.904898   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.904905   10230 round_trippers.go:580]     Audit-Id: 8226953c-70d7-4e9e-a22b-8e5bb441aa2b
	I0108 12:53:40.904912   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.904919   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.904926   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.905001   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124908","namespace":"kube-system","uid":"3dd0df78-6cad-4b47-a66f-74c412846b79","resourceVersion":"775","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"165a046b58d2e71b3de2a638cd49c0fb","kubernetes.io/config.mirror":"165a046b58d2e71b3de2a638cd49c0fb","kubernetes.io/config.seen":"2023-01-08T20:49:35.642401740Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4886 chars]
	I0108 12:53:41.101227   10230 request.go:614] Waited for 195.942208ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.101279   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.101318   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.101335   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.101368   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.106194   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:41.106210   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.106217   10230 round_trippers.go:580]     Audit-Id: f6a698be-b446-42e2-ae8f-284dde2ec675
	I0108 12:53:41.106224   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.106231   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.106241   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.106248   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.106256   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.106331   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:41.106604   10230 pod_ready.go:92] pod "kube-scheduler-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:41.106614   10230 pod_ready.go:81] duration metric: took 401.124729ms waiting for pod "kube-scheduler-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:41.106623   10230 pod_ready.go:38] duration metric: took 36.220178758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
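The pod_ready.go wait above polls each system pod's status conditions until Ready is True (about 35 s for coredns, a few ms for the already-ready control-plane pods). A minimal client-go sketch of that polling pattern, assuming a plain clientset rather than minikube's actual helper:

    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // timeout expires, roughly the loop pod_ready.go is running above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }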
	I0108 12:53:41.106637   10230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 12:53:41.115391   10230 command_runner.go:130] > -16
	I0108 12:53:41.115410   10230 ops.go:34] apiserver oom_adj: -16
	I0108 12:53:41.115416   10230 kubeadm.go:631] restartCluster took 47.698232771s
	I0108 12:53:41.115420   10230 kubeadm.go:398] StartCluster complete in 47.729216191s
	I0108 12:53:41.115433   10230 settings.go:142] acquiring lock: {Name:mkc40aeb9f069e96cc5c51255984662f0292a058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:53:41.115513   10230 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:53:41.115873   10230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:53:41.116248   10230 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:53:41.116413   10230 kapi.go:59] client config for multinode-124908: &rest.Config{Host:"https://127.0.0.1:51399", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 12:53:41.116611   10230 round_trippers.go:463] GET https://127.0.0.1:51399/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 12:53:41.116618   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.116624   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.116630   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.119172   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:41.119182   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.119188   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.119193   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.119198   10230 round_trippers.go:580]     Content-Length: 291
	I0108 12:53:41.119203   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.119208   10230 round_trippers.go:580]     Audit-Id: 0a158930-6b2a-4180-92e2-c79cf87322d4
	I0108 12:53:41.119212   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.119218   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.119231   10230 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"810f231a-a12d-46cc-94f1-efc567a0161a","resourceVersion":"803","creationTimestamp":"2023-01-08T20:49:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 12:53:41.119319   10230 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-124908" rescaled to 1
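The GET on .../deployments/coredns/scale and the "rescaled to 1" message above use the Deployment's scale subresource. A hedged client-go equivalent of that read-then-rescale step (the function name is assumed, not minikube's code):

    package corednsscale

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS reads the coredns Deployment's scale subresource and,
    // if it differs, sets it to the desired replica count.
    func rescaleCoreDNS(cs *kubernetes.Clientset, replicas int32) error {
        ctx := context.TODO()
        deployments := cs.AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == replicas {
            return nil
        }
        scale.Spec.Replicas = replicas
        _, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }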
	I0108 12:53:41.119346   10230 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 12:53:41.119353   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 12:53:41.119379   10230 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0108 12:53:41.119557   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:53:41.160646   10230 addons.go:65] Setting storage-provisioner=true in profile "multinode-124908"
	I0108 12:53:41.160589   10230 out.go:177] * Verifying Kubernetes components...
	I0108 12:53:41.160649   10230 addons.go:65] Setting default-storageclass=true in profile "multinode-124908"
	I0108 12:53:41.160680   10230 addons.go:227] Setting addon storage-provisioner=true in "multinode-124908"
	I0108 12:53:41.181798   10230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-124908"
	W0108 12:53:41.181808   10230 addons.go:236] addon storage-provisioner should already be in state true
	I0108 12:53:41.181815   10230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 12:53:41.175293   10230 command_runner.go:130] > apiVersion: v1
	I0108 12:53:41.181834   10230 command_runner.go:130] > data:
	I0108 12:53:41.181842   10230 command_runner.go:130] >   Corefile: |
	I0108 12:53:41.181846   10230 command_runner.go:130] >     .:53 {
	I0108 12:53:41.181851   10230 command_runner.go:130] >         errors
	I0108 12:53:41.181856   10230 command_runner.go:130] >         health {
	I0108 12:53:41.181857   10230 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:53:41.181869   10230 command_runner.go:130] >            lameduck 5s
	I0108 12:53:41.181873   10230 command_runner.go:130] >         }
	I0108 12:53:41.181876   10230 command_runner.go:130] >         ready
	I0108 12:53:41.181882   10230 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 12:53:41.181887   10230 command_runner.go:130] >            pods insecure
	I0108 12:53:41.181908   10230 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 12:53:41.181916   10230 command_runner.go:130] >            ttl 30
	I0108 12:53:41.181923   10230 command_runner.go:130] >         }
	I0108 12:53:41.181942   10230 command_runner.go:130] >         prometheus :9153
	I0108 12:53:41.181959   10230 command_runner.go:130] >         hosts {
	I0108 12:53:41.181964   10230 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0108 12:53:41.181968   10230 command_runner.go:130] >            fallthrough
	I0108 12:53:41.181975   10230 command_runner.go:130] >         }
	I0108 12:53:41.181980   10230 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 12:53:41.181984   10230 command_runner.go:130] >            max_concurrent 1000
	I0108 12:53:41.181988   10230 command_runner.go:130] >         }
	I0108 12:53:41.181991   10230 command_runner.go:130] >         cache 30
	I0108 12:53:41.181995   10230 command_runner.go:130] >         loop
	I0108 12:53:41.181999   10230 command_runner.go:130] >         reload
	I0108 12:53:41.182003   10230 command_runner.go:130] >         loadbalance
	I0108 12:53:41.182006   10230 command_runner.go:130] >     }
	I0108 12:53:41.182010   10230 command_runner.go:130] > kind: ConfigMap
	I0108 12:53:41.182013   10230 command_runner.go:130] > metadata:
	I0108 12:53:41.182017   10230 command_runner.go:130] >   creationTimestamp: "2023-01-08T20:49:35Z"
	I0108 12:53:41.182020   10230 command_runner.go:130] >   name: coredns
	I0108 12:53:41.182024   10230 command_runner.go:130] >   namespace: kube-system
	I0108 12:53:41.182027   10230 command_runner.go:130] >   resourceVersion: "367"
	I0108 12:53:41.182031   10230 command_runner.go:130] >   uid: 42630cd3-ff72-40ae-bd48-b7a868baf4b9
	I0108 12:53:41.182113   10230 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
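The "CoreDNS already contains host.minikube.internal host record, skipping" decision above is made from the Corefile fetched a moment earlier via kubectl over ssh (the get configmap run at 12:53:41.119353). A minimal client-go sketch of the same check, assuming a direct clientset rather than minikube's ssh_runner path:

    package corednscheck

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasMinikubeHostRecord reports whether the coredns Corefile already
    // carries the host.minikube.internal hosts entry.
    func hasMinikubeHostRecord(cs *kubernetes.Clientset) (bool, error) {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }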
	I0108 12:53:41.182134   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:53:41.182199   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:53:41.193197   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:41.249743   10230 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:53:41.271021   10230 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 12:53:41.271307   10230 kapi.go:59] client config for multinode-124908: &rest.Config{Host:"https://127.0.0.1:51399", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 12:53:41.291866   10230 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 12:53:41.291884   10230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 12:53:41.292021   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:41.292185   10230 round_trippers.go:463] GET https://127.0.0.1:51399/apis/storage.k8s.io/v1/storageclasses
	I0108 12:53:41.292198   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.292211   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.292248   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.297081   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:41.297107   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.297116   10230 round_trippers.go:580]     Audit-Id: 1428949f-b14e-4e70-b573-42b12a95cf1a
	I0108 12:53:41.297123   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.297130   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.297135   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.297140   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.297145   10230 round_trippers.go:580]     Content-Length: 1273
	I0108 12:53:41.297149   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.297217   10230 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"803"},"items":[{"metadata":{"name":"standard","uid":"b0361e2f-3ac8-4575-88dc-aebe0c85a19d","resourceVersion":"376","creationTimestamp":"2023-01-08T20:49:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-08T20:49:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 12:53:41.297637   10230 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b0361e2f-3ac8-4575-88dc-aebe0c85a19d","resourceVersion":"376","creationTimestamp":"2023-01-08T20:49:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-08T20:49:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 12:53:41.297674   10230 round_trippers.go:463] PUT https://127.0.0.1:51399/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 12:53:41.297679   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.297685   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.297690   10230 round_trippers.go:473]     Content-Type: application/json
	I0108 12:53:41.297696   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.299832   10230 node_ready.go:35] waiting up to 6m0s for node "multinode-124908" to be "Ready" ...
	I0108 12:53:41.300442   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.300456   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.300477   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.300489   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.301281   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:41.301291   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.301297   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.301303   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.301309   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.301314   10230 round_trippers.go:580]     Content-Length: 1220
	I0108 12:53:41.301319   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.301324   10230 round_trippers.go:580]     Audit-Id: ac230961-8edd-4176-9590-1a127a759830
	I0108 12:53:41.301333   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.301357   10230 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b0361e2f-3ac8-4575-88dc-aebe0c85a19d","resourceVersion":"376","creationTimestamp":"2023-01-08T20:49:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-08T20:49:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 12:53:41.301435   10230 addons.go:227] Setting addon default-storageclass=true in "multinode-124908"
	W0108 12:53:41.301445   10230 addons.go:236] addon default-storageclass should already be in state true
	I0108 12:53:41.301469   10230 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:53:41.301877   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:53:41.303892   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:41.303920   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.303929   10230 round_trippers.go:580]     Audit-Id: bb8ec0c2-2d28-41fc-bbf7-ee009aa8292a
	I0108 12:53:41.303937   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.303952   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.303957   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.303962   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.303983   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.304115   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:41.304418   10230 node_ready.go:49] node "multinode-124908" has status "Ready":"True"
	I0108 12:53:41.304427   10230 node_ready.go:38] duration metric: took 4.579674ms waiting for node "multinode-124908" to be "Ready" ...
	I0108 12:53:41.304436   10230 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 12:53:41.356520   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:53:41.361820   10230 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 12:53:41.361832   10230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 12:53:41.361914   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:41.420640   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:53:41.447685   10230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 12:53:41.500655   10230 request.go:614] Waited for 196.170537ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:41.500693   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:41.500699   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.500705   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.500712   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.504471   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:41.504493   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.504501   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.504508   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.504515   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.504522   10230 round_trippers.go:580]     Audit-Id: 1903d581-7a72-4925-95b4-95eb1d8d8661
	I0108 12:53:41.504529   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.504540   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.507886   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"803"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84955 chars]
	I0108 12:53:41.510464   10230 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-f6gqj" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:41.513293   10230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 12:53:41.682063   10230 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0108 12:53:41.684828   10230 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0108 12:53:41.687336   10230 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0108 12:53:41.689470   10230 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0108 12:53:41.691409   10230 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0108 12:53:41.702431   10230 request.go:614] Waited for 191.926096ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:41.702492   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:41.702499   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.702508   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.702516   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.737837   10230 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0108 12:53:41.737859   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.737870   10230 round_trippers.go:580]     Audit-Id: c3ee1736-1ec3-4ce1-9bae-c53ccdf0e2cb
	I0108 12:53:41.737884   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.737894   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.737906   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.737915   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.737925   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.738592   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6552 chars]
	I0108 12:53:41.738690   10230 command_runner.go:130] > pod/storage-provisioner configured
	I0108 12:53:41.764480   10230 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0108 12:53:41.793895   10230 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 12:53:41.836360   10230 addons.go:488] enableAddons completed in 716.99416ms
	I0108 12:53:41.901337   10230 request.go:614] Waited for 162.271701ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.901431   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.901440   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.901453   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.901463   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.905891   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:41.905906   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.905915   10230 round_trippers.go:580]     Audit-Id: 4b73074a-5b37-4e19-969a-f0e4a6534ce4
	I0108 12:53:41.905935   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.905940   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.905945   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.905950   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.905955   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.906031   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:41.906248   10230 pod_ready.go:92] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:41.906271   10230 pod_ready.go:81] duration metric: took 395.782805ms waiting for pod "coredns-565d847f94-f6gqj" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:41.906277   10230 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.100909   10230 request.go:614] Waited for 194.549653ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/etcd-multinode-124908
	I0108 12:53:42.100973   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/etcd-multinode-124908
	I0108 12:53:42.100983   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.100998   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.101012   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.104923   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:42.104943   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.104954   10230 round_trippers.go:580]     Audit-Id: 8a6a6b81-b12e-4642-8e80-283d5924fa8c
	I0108 12:53:42.104977   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.104982   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.104988   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.104994   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.105000   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.105075   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124908","namespace":"kube-system","uid":"9cf1a608-48d9-453e-bd35-263521e756e4","resourceVersion":"742","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.mirror":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.seen":"2023-01-08T20:49:35.642390520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I0108 12:53:42.300421   10230 request.go:614] Waited for 195.058095ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:42.300524   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:42.300538   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.300551   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.300567   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.304869   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:42.304881   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.304887   10230 round_trippers.go:580]     Audit-Id: 931a86dc-9c69-4693-9280-45a9e62b9a66
	I0108 12:53:42.304892   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.304897   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.304902   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.304907   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.304912   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.304989   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:42.305207   10230 pod_ready.go:92] pod "etcd-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:42.305214   10230 pod_ready.go:81] duration metric: took 398.937565ms waiting for pod "etcd-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.305232   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.502373   10230 request.go:614] Waited for 197.099033ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124908
	I0108 12:53:42.502428   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124908
	I0108 12:53:42.502437   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.502454   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.502487   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.506954   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:42.506971   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.506977   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.506984   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.506989   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.506995   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.506999   10230 round_trippers.go:580]     Audit-Id: 74d01110-1212-41f6-885d-19ae1aad79e8
	I0108 12:53:42.507004   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.507084   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124908","namespace":"kube-system","uid":"7e7e7fa5-c965-4737-83b1-afd48eb87547","resourceVersion":"779","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7e3bdd07923da057548f2016d7097374","kubernetes.io/config.mirror":"7e3bdd07923da057548f2016d7097374","kubernetes.io/config.seen":"2023-01-08T20:49:35.642400230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I0108 12:53:42.700699   10230 request.go:614] Waited for 193.313701ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:42.700764   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:42.700776   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.700788   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.700798   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.705107   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:42.705123   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.705131   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.705138   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.705146   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.705153   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.705159   10230 round_trippers.go:580]     Audit-Id: a51a02b8-1bba-4753-819b-86dc4a494c6a
	I0108 12:53:42.705165   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.705243   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:42.705485   10230 pod_ready.go:92] pod "kube-apiserver-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:42.705491   10230 pod_ready.go:81] duration metric: took 400.256787ms waiting for pod "kube-apiserver-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.705498   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.902435   10230 request.go:614] Waited for 196.87458ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124908
	I0108 12:53:42.902509   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124908
	I0108 12:53:42.902519   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.902531   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.902542   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.906293   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:42.906312   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.906320   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.906331   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.906339   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.906346   10230 round_trippers.go:580]     Audit-Id: 161d2899-28ff-444f-b1e8-fbd0a3430b66
	I0108 12:53:42.906367   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.906371   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.906691   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124908","namespace":"kube-system","uid":"41ff8cf2-6b35-47c2-8f48-120e6adf98bb","resourceVersion":"763","creationTimestamp":"2023-01-08T20:49:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5faaebc8229ee8bf257c9d1c46ead3c","kubernetes.io/config.mirror":"d5faaebc8229ee8bf257c9d1c46ead3c","kubernetes.io/config.seen":"2023-01-08T20:49:35.642401085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8002 chars]
	I0108 12:53:43.100703   10230 request.go:614] Waited for 193.667903ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:43.100770   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:43.100778   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.100790   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.100802   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.105234   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:43.105245   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.105251   10230 round_trippers.go:580]     Audit-Id: e4c6d05f-c5f1-494d-a339-a2849a6c8bc9
	I0108 12:53:43.105261   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.105266   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.105271   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.105276   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.105281   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.105350   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:43.105553   10230 pod_ready.go:92] pod "kube-controller-manager-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:43.105561   10230 pod_ready.go:81] duration metric: took 400.062598ms waiting for pod "kube-controller-manager-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.105568   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hq6ms" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.302414   10230 request.go:614] Waited for 196.796115ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-hq6ms
	I0108 12:53:43.302510   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-hq6ms
	I0108 12:53:43.302546   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.302560   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.302573   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.307087   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:43.307100   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.307106   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.307114   10230 round_trippers.go:580]     Audit-Id: 2387e544-3b0a-4e3f-ae9e-dba95cda9f00
	I0108 12:53:43.307120   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.307126   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.307131   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.307135   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.307184   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hq6ms","generateName":"kube-proxy-","namespace":"kube-system","uid":"3deaa832-bac0-47e3-bdef-482b094bf90f","resourceVersion":"669","creationTimestamp":"2023-01-08T20:51:09Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I0108 12:53:43.501055   10230 request.go:614] Waited for 193.5622ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m03
	I0108 12:53:43.501110   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m03
	I0108 12:53:43.501118   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.501132   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.501145   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.505291   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:43.505318   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.505324   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.505328   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.505333   10230 round_trippers.go:580]     Audit-Id: 5957559b-79e2-4dc0-8c3b-52b24355a1ac
	I0108 12:53:43.505340   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.505345   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.505350   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.505444   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908-m03","uid":"00d677bd-1b22-4d63-8258-31e7e0d73f15","resourceVersion":"756","creationTimestamp":"2023-01-08T20:51:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4321 chars]
	I0108 12:53:43.505651   10230 pod_ready.go:92] pod "kube-proxy-hq6ms" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:43.505672   10230 pod_ready.go:81] duration metric: took 400.091366ms waiting for pod "kube-proxy-hq6ms" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.505682   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kzv6k" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.701142   10230 request.go:614] Waited for 195.377111ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-kzv6k
	I0108 12:53:43.701246   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-kzv6k
	I0108 12:53:43.701258   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.701270   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.701280   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.705772   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:43.705801   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.705807   10230 round_trippers.go:580]     Audit-Id: d867dddd-1ab7-4d7f-9efb-c519e160b01d
	I0108 12:53:43.705812   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.705817   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.705822   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.705827   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.705832   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.705905   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kzv6k","generateName":"kube-proxy-","namespace":"kube-system","uid":"05a4b261-aa83-4e23-83c6-0a50d659b5b7","resourceVersion":"705","creationTimestamp":"2023-01-08T20:49:47Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0108 12:53:43.901124   10230 request.go:614] Waited for 194.938036ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:43.901166   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:43.901174   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.901203   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.901211   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.903849   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:43.903864   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.903872   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.903879   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.903883   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.903890   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.903895   10230 round_trippers.go:580]     Audit-Id: 8c78c216-a837-4f50-ba4e-5583ad57e448
	I0108 12:53:43.903900   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.903979   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:43.904197   10230 pod_ready.go:92] pod "kube-proxy-kzv6k" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:43.904204   10230 pod_ready.go:81] duration metric: took 398.522965ms waiting for pod "kube-proxy-kzv6k" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.904211   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vx6bb" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:44.101943   10230 request.go:614] Waited for 197.606379ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-vx6bb
	I0108 12:53:44.101990   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-vx6bb
	I0108 12:53:44.101998   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.102010   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.102024   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.105863   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:44.105878   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.105893   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.105902   10230 round_trippers.go:580]     Audit-Id: bbf0972b-ff91-49c7-a5e8-3bbcc67c6cbc
	I0108 12:53:44.105917   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.105924   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.105933   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.105941   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.106196   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vx6bb","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bff7041-dbf7-4143-9f70-52a12dd69f64","resourceVersion":"467","creationTimestamp":"2023-01-08T20:50:25Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I0108 12:53:44.300491   10230 request.go:614] Waited for 193.961968ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:44.300555   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:44.300563   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.300577   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.300590   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.304851   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:44.304866   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.304874   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.304881   10230 round_trippers.go:580]     Audit-Id: 39decd65-fe98-451f-92d9-49d138f96fdf
	I0108 12:53:44.304888   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.304895   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.304902   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.304910   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.304984   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908-m02","uid":"06778a45-7a2c-401b-918a-d4864150c87c","resourceVersion":"587","creationTimestamp":"2023-01-08T20:50:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4504 chars]
	I0108 12:53:44.305219   10230 pod_ready.go:92] pod "kube-proxy-vx6bb" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:44.305226   10230 pod_ready.go:81] duration metric: took 401.015806ms waiting for pod "kube-proxy-vx6bb" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:44.305234   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:44.500400   10230 request.go:614] Waited for 195.127146ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124908
	I0108 12:53:44.500503   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124908
	I0108 12:53:44.500514   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.500525   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.500536   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.504669   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:44.504681   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.504687   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.504692   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.504697   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.504703   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.504708   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.504712   10230 round_trippers.go:580]     Audit-Id: 3bcb29be-4e6f-41c9-a112-4cd5f16ff2fa
	I0108 12:53:44.504774   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124908","namespace":"kube-system","uid":"3dd0df78-6cad-4b47-a66f-74c412846b79","resourceVersion":"775","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"165a046b58d2e71b3de2a638cd49c0fb","kubernetes.io/config.mirror":"165a046b58d2e71b3de2a638cd49c0fb","kubernetes.io/config.seen":"2023-01-08T20:49:35.642401740Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4886 chars]
	I0108 12:53:44.701810   10230 request.go:614] Waited for 196.77847ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:44.701946   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:44.701956   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.701970   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.701982   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.706367   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:44.706378   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.706384   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.706389   10230 round_trippers.go:580]     Audit-Id: cef8e304-7345-4cd4-80b3-d0b61b739847
	I0108 12:53:44.706398   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.706403   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.706409   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.706414   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.706491   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:44.706689   10230 pod_ready.go:92] pod "kube-scheduler-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:44.706696   10230 pod_ready.go:81] duration metric: took 401.46151ms waiting for pod "kube-scheduler-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:44.706703   10230 pod_ready.go:38] duration metric: took 3.40229807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
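For reference, the per-pod readiness wait logged above (pod_ready.go polling each kube-system pod for its Ready condition) follows a standard client-go pattern. The sketch below is illustrative only, assuming an already-configured kubernetes.Interface client; it is not minikube's actual pod_ready.go code.

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitForPodReady polls a kube-system pod until it reports Ready or the timeout expires.
    func waitForPodReady(client kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat transient errors as "not ready yet" and keep polling
    		}
    		return isPodReady(pod), nil
    	})
    }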
	I0108 12:53:44.706714   10230 api_server.go:51] waiting for apiserver process to appear ...
	I0108 12:53:44.706777   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:53:44.716228   10230 command_runner.go:130] > 1732
	I0108 12:53:44.716896   10230 api_server.go:71] duration metric: took 3.597583907s to wait for apiserver process to appear ...
	I0108 12:53:44.716908   10230 api_server.go:87] waiting for apiserver healthz status ...
	I0108 12:53:44.716914   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:44.722287   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 200:
	ok
	I0108 12:53:44.722325   10230 round_trippers.go:463] GET https://127.0.0.1:51399/version
	I0108 12:53:44.722330   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.722337   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.722343   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.723692   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:44.723700   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.723708   10230 round_trippers.go:580]     Content-Length: 263
	I0108 12:53:44.723713   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.723718   10230 round_trippers.go:580]     Audit-Id: 95cb3a4f-fb50-4d2f-9a71-751e3f025983
	I0108 12:53:44.723723   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.723727   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.723732   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.723737   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.723746   10230 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 12:53:44.723767   10230 api_server.go:140] control plane version: v1.25.3
	I0108 12:53:44.723774   10230 api_server.go:130] duration metric: took 6.86244ms to wait for apiserver health ...
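The healthz and /version probes above amount to two plain HTTPS GETs against the apiserver. A minimal sketch, assuming a base URL such as https://127.0.0.1:51399 and an http.Client already configured to trust the cluster CA; this is not minikube's api_server.go implementation.

    package sketch

    import (
    	"encoding/json"
    	"fmt"
    	"io"
    	"net/http"
    )

    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    }

    // checkAPIServer verifies /healthz returns 200, then reads the control-plane
    // version from /version (v1.25.3 in this run).
    func checkAPIServer(client *http.Client, base string) (string, error) {
    	resp, err := client.Get(base + "/healthz")
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return "", fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	vr, err := client.Get(base + "/version")
    	if err != nil {
    		return "", err
    	}
    	defer vr.Body.Close()
    	body, err := io.ReadAll(vr.Body)
    	if err != nil {
    		return "", err
    	}
    	var v versionInfo
    	if err := json.Unmarshal(body, &v); err != nil {
    		return "", err
    	}
    	return v.GitVersion, nil
    }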
	I0108 12:53:44.723782   10230 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 12:53:44.901887   10230 request.go:614] Waited for 178.017792ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:44.901944   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:44.901954   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.902000   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.902012   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.907621   10230 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 12:53:44.907638   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.907646   10230 round_trippers.go:580]     Audit-Id: 79ddedb0-5762-44dd-813a-53b57d52567d
	I0108 12:53:44.907652   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.907659   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.907665   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.907670   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.907676   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.908713   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84955 chars]
	I0108 12:53:44.910646   10230 system_pods.go:59] 12 kube-system pods found
	I0108 12:53:44.910656   10230 system_pods.go:61] "coredns-565d847f94-f6gqj" [1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7] Running
	I0108 12:53:44.910660   10230 system_pods.go:61] "etcd-multinode-124908" [9cf1a608-48d9-453e-bd35-263521e756e4] Running
	I0108 12:53:44.910665   10230 system_pods.go:61] "kindnet-4j92t" [2e0611f9-b324-4059-b858-ca1cc99bb8d9] Running
	I0108 12:53:44.910668   10230 system_pods.go:61] "kindnet-79h6s" [8899610c-9df6-488d-af2f-2848f1ce546b] Running
	I0108 12:53:44.910672   10230 system_pods.go:61] "kindnet-pj4l5" [82ac6efa-2268-472b-bd72-171778eabeb6] Running
	I0108 12:53:44.910675   10230 system_pods.go:61] "kube-apiserver-multinode-124908" [7e7e7fa5-c965-4737-83b1-afd48eb87547] Running
	I0108 12:53:44.910680   10230 system_pods.go:61] "kube-controller-manager-multinode-124908" [41ff8cf2-6b35-47c2-8f48-120e6adf98bb] Running
	I0108 12:53:44.910683   10230 system_pods.go:61] "kube-proxy-hq6ms" [3deaa832-bac0-47e3-bdef-482b094bf90f] Running
	I0108 12:53:44.910687   10230 system_pods.go:61] "kube-proxy-kzv6k" [05a4b261-aa83-4e23-83c6-0a50d659b5b7] Running
	I0108 12:53:44.910692   10230 system_pods.go:61] "kube-proxy-vx6bb" [7bff7041-dbf7-4143-9f70-52a12dd69f64] Running
	I0108 12:53:44.910696   10230 system_pods.go:61] "kube-scheduler-multinode-124908" [3dd0df78-6cad-4b47-a66f-74c412846b79] Running
	I0108 12:53:44.910701   10230 system_pods.go:61] "storage-provisioner" [6eda9f8e-814b-4a17-9ec8-89bd52973d7b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 12:53:44.910705   10230 system_pods.go:74] duration metric: took 186.921464ms to wait for pod list to return data ...
	I0108 12:53:44.910711   10230 default_sa.go:34] waiting for default service account to be created ...
	I0108 12:53:45.102385   10230 request.go:614] Waited for 191.618093ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/default/serviceaccounts
	I0108 12:53:45.102509   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/default/serviceaccounts
	I0108 12:53:45.102517   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:45.102529   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:45.102539   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:45.106506   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:45.106521   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:45.106529   10230 round_trippers.go:580]     Audit-Id: 6cf08d88-678f-4306-abeb-5695da6ee543
	I0108 12:53:45.106543   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:45.106551   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:45.106557   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:45.106564   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:45.106570   10230 round_trippers.go:580]     Content-Length: 261
	I0108 12:53:45.106577   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:45 GMT
	I0108 12:53:45.106590   10230 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ef223f23-cc02-45b1-abac-dc1674e8bcea","resourceVersion":"324","creationTimestamp":"2023-01-08T20:49:48Z"}}]}
	I0108 12:53:45.106747   10230 default_sa.go:45] found service account: "default"
	I0108 12:53:45.106758   10230 default_sa.go:55] duration metric: took 196.044911ms for default service account to be created ...
	I0108 12:53:45.106765   10230 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 12:53:45.300775   10230 request.go:614] Waited for 193.937241ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:45.300834   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:45.300843   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:45.300856   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:45.300869   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:45.306088   10230 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 12:53:45.306101   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:45.306107   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:45 GMT
	I0108 12:53:45.306113   10230 round_trippers.go:580]     Audit-Id: 646c793d-91d1-4135-b873-475fe1917e32
	I0108 12:53:45.306146   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:45.306163   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:45.306173   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:45.306208   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:45.307549   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84955 chars]
	I0108 12:53:45.309479   10230 system_pods.go:86] 12 kube-system pods found
	I0108 12:53:45.309489   10230 system_pods.go:89] "coredns-565d847f94-f6gqj" [1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7] Running
	I0108 12:53:45.309494   10230 system_pods.go:89] "etcd-multinode-124908" [9cf1a608-48d9-453e-bd35-263521e756e4] Running
	I0108 12:53:45.309498   10230 system_pods.go:89] "kindnet-4j92t" [2e0611f9-b324-4059-b858-ca1cc99bb8d9] Running
	I0108 12:53:45.309501   10230 system_pods.go:89] "kindnet-79h6s" [8899610c-9df6-488d-af2f-2848f1ce546b] Running
	I0108 12:53:45.309505   10230 system_pods.go:89] "kindnet-pj4l5" [82ac6efa-2268-472b-bd72-171778eabeb6] Running
	I0108 12:53:45.309509   10230 system_pods.go:89] "kube-apiserver-multinode-124908" [7e7e7fa5-c965-4737-83b1-afd48eb87547] Running
	I0108 12:53:45.309513   10230 system_pods.go:89] "kube-controller-manager-multinode-124908" [41ff8cf2-6b35-47c2-8f48-120e6adf98bb] Running
	I0108 12:53:45.309518   10230 system_pods.go:89] "kube-proxy-hq6ms" [3deaa832-bac0-47e3-bdef-482b094bf90f] Running
	I0108 12:53:45.309521   10230 system_pods.go:89] "kube-proxy-kzv6k" [05a4b261-aa83-4e23-83c6-0a50d659b5b7] Running
	I0108 12:53:45.309524   10230 system_pods.go:89] "kube-proxy-vx6bb" [7bff7041-dbf7-4143-9f70-52a12dd69f64] Running
	I0108 12:53:45.309531   10230 system_pods.go:89] "kube-scheduler-multinode-124908" [3dd0df78-6cad-4b47-a66f-74c412846b79] Running
	I0108 12:53:45.309537   10230 system_pods.go:89] "storage-provisioner" [6eda9f8e-814b-4a17-9ec8-89bd52973d7b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 12:53:45.309543   10230 system_pods.go:126] duration metric: took 202.776385ms to wait for k8s-apps to be running ...
	I0108 12:53:45.309548   10230 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 12:53:45.309615   10230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 12:53:45.319693   10230 system_svc.go:56] duration metric: took 10.140632ms WaitForService to wait for kubelet.
	I0108 12:53:45.319706   10230 kubeadm.go:573] duration metric: took 4.200401918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 12:53:45.319724   10230 node_conditions.go:102] verifying NodePressure condition ...
	I0108 12:53:45.502379   10230 request.go:614] Waited for 182.601969ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes
	I0108 12:53:45.502516   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes
	I0108 12:53:45.502527   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:45.502543   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:45.502555   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:45.507021   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:45.507035   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:45.507041   10230 round_trippers.go:580]     Audit-Id: 5c2ce207-cd5f-4861-9459-fff03ac1a13e
	I0108 12:53:45.507046   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:45.507051   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:45.507057   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:45.507073   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:45.507082   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:45 GMT
	I0108 12:53:45.507185   10230 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16137 chars]
	I0108 12:53:45.507611   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:45.507619   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:45.507633   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:45.507637   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:45.507640   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:45.507644   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:45.507647   10230 node_conditions.go:105] duration metric: took 187.921257ms to run NodePressure ...
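The NodePressure readout above (ephemeral storage and cpu reported once per node, three nodes in this cluster) comes from the capacity fields of each v1 Node object. An illustrative sketch, again assuming a configured client and not the actual node_conditions.go code:

    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints the two capacity values the log shows.
    func printNodeCapacity(client kubernetes.Interface) error {
    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }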
	I0108 12:53:45.507654   10230 start.go:217] waiting for startup goroutines ...
	I0108 12:53:45.508317   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:53:45.508386   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:53:45.529445   10230 out.go:177] * Starting worker node multinode-124908-m02 in cluster multinode-124908
	I0108 12:53:45.573063   10230 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 12:53:45.594442   10230 out.go:177] * Pulling base image ...
	I0108 12:53:45.637218   10230 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:53:45.637228   10230 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 12:53:45.637265   10230 cache.go:57] Caching tarball of preloaded images
	I0108 12:53:45.637476   10230 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 12:53:45.637499   10230 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 12:53:45.638547   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:53:45.695672   10230 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 12:53:45.695688   10230 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 12:53:45.695706   10230 cache.go:193] Successfully downloaded all kic artifacts
	I0108 12:53:45.695737   10230 start.go:364] acquiring machines lock for multinode-124908-m02: {Name:mk32c9261441e7ef10a9285ab8073f1064c4c4e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 12:53:45.695825   10230 start.go:368] acquired machines lock for "multinode-124908-m02" in 76.422µs
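The machines lock above is acquired with a 500ms retry delay and a 10m timeout. A rough sketch of that retry loop, with tryLock as a hypothetical stand-in for the real lock backend:

    package sketch

    import (
    	"errors"
    	"time"
    )

    var errLockHeld = errors.New("lock is held by another process")

    // acquireMachinesLock retries an exclusive lock at a fixed delay until it is
    // obtained or the timeout elapses.
    func acquireMachinesLock(tryLock func(name string) error, name string, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := tryLock(name)
    		if err == nil {
    			return nil
    		}
    		if !errors.Is(err, errLockHeld) {
    			return err
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out acquiring machines lock " + name)
    		}
    		time.Sleep(delay)
    	}
    }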
	I0108 12:53:45.695846   10230 start.go:96] Skipping create...Using existing machine configuration
	I0108 12:53:45.695852   10230 fix.go:55] fixHost starting: m02
	I0108 12:53:45.696139   10230 cli_runner.go:164] Run: docker container inspect multinode-124908-m02 --format={{.State.Status}}
	I0108 12:53:45.752049   10230 fix.go:103] recreateIfNeeded on multinode-124908-m02: state=Stopped err=<nil>
	W0108 12:53:45.752073   10230 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 12:53:45.773894   10230 out.go:177] * Restarting existing docker container for "multinode-124908-m02" ...
	I0108 12:53:45.815868   10230 cli_runner.go:164] Run: docker start multinode-124908-m02
	I0108 12:53:46.148768   10230 cli_runner.go:164] Run: docker container inspect multinode-124908-m02 --format={{.State.Status}}
	I0108 12:53:46.212326   10230 kic.go:415] container "multinode-124908-m02" state is running.
	I0108 12:53:46.212952   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908-m02
	I0108 12:53:46.277649   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:53:46.278157   10230 machine.go:88] provisioning docker machine ...
	I0108 12:53:46.278171   10230 ubuntu.go:169] provisioning hostname "multinode-124908-m02"
	I0108 12:53:46.278264   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:46.353849   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:46.354038   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:46.354048   10230 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-124908-m02 && echo "multinode-124908-m02" | sudo tee /etc/hostname
	I0108 12:53:46.547570   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-124908-m02
	
	I0108 12:53:46.547681   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:46.616784   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:46.616961   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:46.616976   10230 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-124908-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-124908-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-124908-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 12:53:46.739799   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 12:53:46.739816   10230 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 12:53:46.739829   10230 ubuntu.go:177] setting up certificates
	I0108 12:53:46.739840   10230 provision.go:83] configureAuth start
	I0108 12:53:46.739937   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908-m02
	I0108 12:53:46.805029   10230 provision.go:138] copyHostCerts
	I0108 12:53:46.805084   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:53:46.805143   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 12:53:46.805149   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:53:46.805257   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 12:53:46.805418   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:53:46.805463   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 12:53:46.805468   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:53:46.805537   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 12:53:46.805666   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:53:46.805703   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 12:53:46.805707   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:53:46.805770   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 12:53:46.805897   10230 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.multinode-124908-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-124908-m02]
	I0108 12:53:46.916825   10230 provision.go:172] copyRemoteCerts
	I0108 12:53:46.916904   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 12:53:46.916975   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:46.979531   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:47.085539   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 12:53:47.085649   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 12:53:47.150159   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 12:53:47.150271   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 12:53:47.168673   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 12:53:47.168767   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 12:53:47.186170   10230 provision.go:86] duration metric: configureAuth took 446.326038ms
	I0108 12:53:47.186183   10230 ubuntu.go:193] setting minikube options for container-runtime
	I0108 12:53:47.186378   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:53:47.186454   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.246101   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:47.246261   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:47.246270   10230 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 12:53:47.361836   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 12:53:47.361853   10230 ubuntu.go:71] root file system type: overlay
	I0108 12:53:47.362094   10230 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 12:53:47.362175   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.420915   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:47.421070   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:47.421118   10230 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 12:53:47.546310   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 12:53:47.546417   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.605886   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:47.606037   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:47.606050   10230 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 12:53:47.729868   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: 
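The SSH commands above implement an idempotent unit update: render docker.service.new, diff it against the installed unit, and only swap the file and restart Docker when the content differs. A sketch of that pattern, where Runner and WriteFile are hypothetical stand-ins for minikube's ssh_runner rather than its real API:

    package sketch

    import "fmt"

    // Runner abstracts "run this shell command on the target node".
    type Runner interface {
    	Run(cmd string) error
    	WriteFile(path, content string) error // hypothetical helper: write content to path on the node
    }

    // updateDockerUnit installs the unit idempotently: docker is only restarted
    // when the rendered content differs from what is already on disk.
    func updateDockerUnit(r Runner, unit string) error {
    	const path = "/lib/systemd/system/docker.service"
    	if err := r.WriteFile(path+".new", unit); err != nil {
    		return err
    	}
    	swap := fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
    		path)
    	return r.Run(swap)
    }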
	I0108 12:53:47.729893   10230 machine.go:91] provisioned docker machine in 1.451746966s
	I0108 12:53:47.729901   10230 start.go:300] post-start starting for "multinode-124908-m02" (driver="docker")
	I0108 12:53:47.729908   10230 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 12:53:47.730011   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 12:53:47.730086   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.787960   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:47.874997   10230 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 12:53:47.878585   10230 command_runner.go:130] > NAME="Ubuntu"
	I0108 12:53:47.878594   10230 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0108 12:53:47.878599   10230 command_runner.go:130] > ID=ubuntu
	I0108 12:53:47.878602   10230 command_runner.go:130] > ID_LIKE=debian
	I0108 12:53:47.878607   10230 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0108 12:53:47.878611   10230 command_runner.go:130] > VERSION_ID="20.04"
	I0108 12:53:47.878615   10230 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 12:53:47.878620   10230 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 12:53:47.878624   10230 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 12:53:47.878631   10230 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 12:53:47.878638   10230 command_runner.go:130] > VERSION_CODENAME=focal
	I0108 12:53:47.878642   10230 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0108 12:53:47.878688   10230 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 12:53:47.878701   10230 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 12:53:47.878708   10230 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 12:53:47.878713   10230 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 12:53:47.878718   10230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 12:53:47.878810   10230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 12:53:47.878967   10230 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 12:53:47.878973   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /etc/ssl/certs/40832.pem
	I0108 12:53:47.879172   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 12:53:47.886549   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:53:47.903518   10230 start.go:303] post-start completed in 173.609056ms
	I0108 12:53:47.903608   10230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 12:53:47.903678   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.961498   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:48.044531   10230 command_runner.go:130] > 12%
	I0108 12:53:48.044617   10230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 12:53:48.049126   10230 command_runner.go:130] > 49G
	I0108 12:53:48.049444   10230 fix.go:57] fixHost completed within 2.353620004s
	I0108 12:53:48.049455   10230 start.go:83] releasing machines lock for "multinode-124908-m02", held for 2.35365316s
	I0108 12:53:48.049565   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908-m02
	I0108 12:53:48.130822   10230 out.go:177] * Found network options:
	I0108 12:53:48.153112   10230 out.go:177]   - NO_PROXY=192.168.58.2
	W0108 12:53:48.174750   10230 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 12:53:48.174813   10230 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 12:53:48.174918   10230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 12:53:48.174928   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 12:53:48.174998   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:48.175000   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:48.237628   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:48.237818   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:48.376909   10230 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 12:53:48.378363   10230 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0108 12:53:48.391852   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:53:48.462953   10230 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 12:53:48.555269   10230 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 12:53:48.565129   10230 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0108 12:53:48.565238   10230 command_runner.go:130] > [Unit]
	I0108 12:53:48.565249   10230 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 12:53:48.565254   10230 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 12:53:48.565259   10230 command_runner.go:130] > BindsTo=containerd.service
	I0108 12:53:48.565266   10230 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0108 12:53:48.565273   10230 command_runner.go:130] > Wants=network-online.target
	I0108 12:53:48.565283   10230 command_runner.go:130] > Requires=docker.socket
	I0108 12:53:48.565290   10230 command_runner.go:130] > StartLimitBurst=3
	I0108 12:53:48.565298   10230 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 12:53:48.565306   10230 command_runner.go:130] > [Service]
	I0108 12:53:48.565312   10230 command_runner.go:130] > Type=notify
	I0108 12:53:48.565321   10230 command_runner.go:130] > Restart=on-failure
	I0108 12:53:48.565327   10230 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0108 12:53:48.565334   10230 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 12:53:48.565349   10230 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 12:53:48.565357   10230 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 12:53:48.565362   10230 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 12:53:48.565369   10230 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 12:53:48.565375   10230 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 12:53:48.565382   10230 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 12:53:48.565397   10230 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 12:53:48.565403   10230 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 12:53:48.565407   10230 command_runner.go:130] > ExecStart=
	I0108 12:53:48.565418   10230 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0108 12:53:48.565423   10230 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 12:53:48.565434   10230 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 12:53:48.565439   10230 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 12:53:48.565443   10230 command_runner.go:130] > LimitNOFILE=infinity
	I0108 12:53:48.565447   10230 command_runner.go:130] > LimitNPROC=infinity
	I0108 12:53:48.565450   10230 command_runner.go:130] > LimitCORE=infinity
	I0108 12:53:48.565455   10230 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 12:53:48.565460   10230 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 12:53:48.565463   10230 command_runner.go:130] > TasksMax=infinity
	I0108 12:53:48.565467   10230 command_runner.go:130] > TimeoutStartSec=0
	I0108 12:53:48.565472   10230 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 12:53:48.565476   10230 command_runner.go:130] > Delegate=yes
	I0108 12:53:48.565487   10230 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 12:53:48.565491   10230 command_runner.go:130] > KillMode=process
	I0108 12:53:48.565495   10230 command_runner.go:130] > [Install]
	I0108 12:53:48.565499   10230 command_runner.go:130] > WantedBy=multi-user.target
	I0108 12:53:48.566000   10230 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 12:53:48.566088   10230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 12:53:48.575878   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 12:53:48.590192   10230 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 12:53:48.590207   10230 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 12:53:48.591191   10230 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 12:53:48.658100   10230 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 12:53:48.731370   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:53:48.803470   10230 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 12:53:49.030629   10230 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 12:53:49.104116   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:53:49.181921   10230 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 12:53:49.191913   10230 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 12:53:49.191998   10230 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 12:53:49.195981   10230 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 12:53:49.195992   10230 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 12:53:49.196001   10230 command_runner.go:130] > Device: 10002eh/1048622d	Inode: 131         Links: 1
	I0108 12:53:49.196008   10230 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0108 12:53:49.196014   10230 command_runner.go:130] > Access: 2023-01-08 20:53:48.575911189 +0000
	I0108 12:53:49.196019   10230 command_runner.go:130] > Modify: 2023-01-08 20:53:48.474911183 +0000
	I0108 12:53:49.196026   10230 command_runner.go:130] > Change: 2023-01-08 20:53:48.482911184 +0000
	I0108 12:53:49.196030   10230 command_runner.go:130] >  Birth: -
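The stat call above is part of a bounded wait (up to 60s) for the cri-dockerd socket to appear after the restart. A minimal local sketch of such a wait, using os.Stat in place of the SSH-run stat seen in the log:

    package sketch

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the socket path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket exists, e.g. /var/run/cri-dockerd.sock
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }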
	I0108 12:53:49.196052   10230 start.go:472] Will wait 60s for crictl version
	I0108 12:53:49.196103   10230 ssh_runner.go:195] Run: sudo crictl version
	I0108 12:53:49.223964   10230 command_runner.go:130] > Version:  0.1.0
	I0108 12:53:49.223977   10230 command_runner.go:130] > RuntimeName:  docker
	I0108 12:53:49.223994   10230 command_runner.go:130] > RuntimeVersion:  20.10.21
	I0108 12:53:49.224000   10230 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0108 12:53:49.225763   10230 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0108 12:53:49.225851   10230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:53:49.254079   10230 command_runner.go:130] > 20.10.21
	I0108 12:53:49.256407   10230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:53:49.283828   10230 command_runner.go:130] > 20.10.21
	I0108 12:53:49.330593   10230 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 12:53:49.352743   10230 out.go:177]   - env NO_PROXY=192.168.58.2
	I0108 12:53:49.374844   10230 cli_runner.go:164] Run: docker exec -t multinode-124908-m02 dig +short host.docker.internal
	I0108 12:53:49.481982   10230 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 12:53:49.482100   10230 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 12:53:49.486667   10230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 12:53:49.496963   10230 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908 for IP: 192.168.58.3
	I0108 12:53:49.497104   10230 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 12:53:49.497166   10230 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 12:53:49.497174   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 12:53:49.497203   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 12:53:49.497232   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 12:53:49.497253   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 12:53:49.497356   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 12:53:49.497397   10230 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 12:53:49.497409   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 12:53:49.497451   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 12:53:49.497507   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 12:53:49.497544   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 12:53:49.497620   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:53:49.497661   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.497684   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem -> /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.497708   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.498056   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 12:53:49.516005   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 12:53:49.533665   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 12:53:49.551119   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 12:53:49.569154   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 12:53:49.587046   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 12:53:49.604598   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 12:53:49.621738   10230 ssh_runner.go:195] Run: openssl version
	I0108 12:53:49.626940   10230 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0108 12:53:49.627225   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 12:53:49.635035   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.638742   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.638884   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.638955   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.644258   10230 command_runner.go:130] > 3ec20f2e
	I0108 12:53:49.644643   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 12:53:49.652174   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 12:53:49.660230   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.664205   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.664296   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.664351   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.669882   10230 command_runner.go:130] > b5213941
	I0108 12:53:49.669944   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 12:53:49.677564   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 12:53:49.685526   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.689557   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.689677   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.689728   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.694870   10230 command_runner.go:130] > 51391683
	I0108 12:53:49.695092   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
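
The four-step pattern above is how the test copies extra CA material into the node's trust store: each PEM is placed under /usr/share/ca-certificates, hashed with openssl, and symlinked into /etc/ssl/certs under that hash. A minimal sketch of the same steps, with a hypothetical certificate name standing in for the real ones:

    # Illustrative only: install one CA certificate the way the commands above do.
    CERT=/usr/share/ca-certificates/example.pem        # hypothetical path; this run uses 4083.pem, 40832.pem, minikubeCA.pem
    sudo test -s "$CERT"                                # file must exist and be non-empty
    HASH=$(openssl x509 -hash -noout -in "$CERT")       # OpenSSL subject hash, e.g. 51391683 above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"      # link under the hash so TLS clients pick it up
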
	I0108 12:53:49.703101   10230 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 12:53:49.770041   10230 command_runner.go:130] > systemd
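
Here the Docker daemon's cgroup driver is queried so that the kubelet configuration generated a few lines below can set a matching cgroupDriver; the same check can be run by hand on the node:

    # Ask Docker which cgroup driver it uses; kubelet must be configured to match.
    docker info --format '{{.CgroupDriver}}'    # prints "systemd" on this node
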
	I0108 12:53:49.772286   10230 cni.go:95] Creating CNI manager for ""
	I0108 12:53:49.772301   10230 cni.go:156] 3 nodes found, recommending kindnet
	I0108 12:53:49.772315   10230 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 12:53:49.772334   10230 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-124908 NodeName:multinode-124908-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 12:53:49.772437   10230 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-124908-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 12:53:49.772490   10230 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-124908-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 12:53:49.772562   10230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 12:53:49.780011   10230 command_runner.go:130] > kubeadm
	I0108 12:53:49.780021   10230 command_runner.go:130] > kubectl
	I0108 12:53:49.780027   10230 command_runner.go:130] > kubelet
	I0108 12:53:49.780929   10230 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 12:53:49.780997   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 12:53:49.788388   10230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I0108 12:53:49.801312   10230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
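
The kubelet flags and the [Unit]/[Service] fragment shown above are what was just written to the worker as a systemd drop-in and service file (482 and 352 bytes in this run). Assuming the minikube ssh --node flag behaves as in current releases, what actually landed there can be inspected with:

    # Look at the files scp'd onto the worker (paths taken from the scp lines above).
    minikube ssh -p multinode-124908 -n m02 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
    minikube ssh -p multinode-124908 -n m02 "sudo cat /lib/systemd/system/kubelet.service"
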
	I0108 12:53:49.814916   10230 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 12:53:49.818820   10230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 12:53:49.828861   10230 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:53:49.829061   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:53:49.829055   10230 start.go:286] JoinCluster: &{Name:multinode-124908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:53:49.829124   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 12:53:49.829194   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:49.889142   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:53:50.040274   10230 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f 
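
The join command just printed comes from kubeadm itself; minikube runs the token helper on the control-plane node, and the same thing can be done by hand to mint a fresh command (binary path taken from the ls output above):

    # Print a fresh join command on the control plane, as the Run: line above does.
    minikube ssh -p multinode-124908 "sudo /var/lib/minikube/binaries/v1.25.3/kubeadm token create --print-join-command --ttl=0"
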
	I0108 12:53:50.040306   10230 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:53:50.040325   10230 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:53:50.040571   10230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-124908-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 12:53:50.040633   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:50.100892   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:53:50.225056   10230 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 12:53:50.250777   10230 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-4j92t, kube-system/kube-proxy-vx6bb
	I0108 12:53:53.263004   10230 command_runner.go:130] > node/multinode-124908-m02 cordoned
	I0108 12:53:53.263020   10230 command_runner.go:130] > pod "busybox-65db55d5d6-k6vhx" has DeletionTimestamp older than 1 seconds, skipping
	I0108 12:53:53.263026   10230 command_runner.go:130] > node/multinode-124908-m02 drained
	I0108 12:53:53.263043   10230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-124908-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.222497044s)
	I0108 12:53:53.263052   10230 node.go:109] successfully drained node "m02"
	I0108 12:53:53.263384   10230 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:53:53.263599   10230 kapi.go:59] client config for multinode-124908: &rest.Config{Host:"https://127.0.0.1:51399", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 12:53:53.263875   10230 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 12:53:53.263902   10230 round_trippers.go:463] DELETE https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:53.263905   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:53.263912   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:53.263918   10230 round_trippers.go:473]     Content-Type: application/json
	I0108 12:53:53.263923   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:53.267249   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:53.267263   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:53.267270   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:53.267275   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:53.267279   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:53.267284   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:53.267288   10230 round_trippers.go:580]     Content-Length: 171
	I0108 12:53:53.267294   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:53 GMT
	I0108 12:53:53.267299   10230 round_trippers.go:580]     Audit-Id: 7c0acc42-7798-47f5-8d1d-5c238749fb6c
	I0108 12:53:53.267311   10230 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-124908-m02","kind":"nodes","uid":"06778a45-7a2c-401b-918a-d4864150c87c"}}
	I0108 12:53:53.267342   10230 node.go:125] successfully deleted node "m02"
	I0108 12:53:53.267351   10230 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
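
That drain-and-delete is the usual way to retire a worker before re-adding it; an equivalent done by hand against this cluster (flags trimmed from the drain invocation logged above) would be:

    # Remove the stale worker node object before attempting a re-join.
    kubectl drain multinode-124908-m02 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data
    kubectl delete node multinode-124908-m02
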
	I0108 12:53:53.267363   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:53:53.267377   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:53:53.338646   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:53:53.449568   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:53:53.449587   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 12:53:53.467352   10230 command_runner.go:130] ! W0108 20:53:53.338191    1110 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:53:53.467368   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:53:53.467389   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:53:53.467396   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:53:53.467402   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:53:53.467412   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:53:53.467424   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:53:53.467451   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0108 12:53:53.467481   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:53:53.338191    1110 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:53:53.467490   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:53:53.467501   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:53:53.509384   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:53:53.509408   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:53:53.509430   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:53:53.509453   10230 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:53:53.338191    1110 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
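
Each retry below repeats the same failure: kubeadm join is rejected because a Node named multinode-124908-m02 with status Ready reappears in the cluster (presumably the worker's still-running kubelet re-registers it), and the kubeadm reset that should clean the worker up aborts because both the containerd and cri-dockerd sockets exist and no criSocket is passed. A hypothetical manual recovery, assuming cri-dockerd is the intended runtime for this cluster, might look like:

    # On the stuck worker: stop the kubelet that keeps re-registering the node,
    # then reset with the CRI socket named explicitly so reset can proceed.
    sudo systemctl stop kubelet
    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
    # From a machine with cluster credentials: drop the stale Node object,
    # then re-run the kubeadm join command printed earlier in this log.
    kubectl delete node multinode-124908-m02
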
	I0108 12:54:04.556072   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:54:04.556131   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:54:04.594753   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:54:04.694438   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:54:04.694468   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 12:54:04.710963   10230 command_runner.go:130] ! W0108 20:54:04.594092    1653 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:54:04.710979   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:54:04.710986   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:54:04.710992   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:54:04.710998   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:54:04.711004   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:54:04.711013   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:54:04.711025   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0108 12:54:04.711053   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:04.594092    1653 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:04.711062   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:54:04.711070   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:54:04.750797   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:54:04.750812   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:04.750827   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:04.750837   10230 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:04.594092    1653 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.358759   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:54:26.358891   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:54:26.398634   10230 command_runner.go:130] ! W0108 20:54:26.398222    1875 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:54:26.398654   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:54:26.421450   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:54:26.426704   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:54:26.490593   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:54:26.490608   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:54:26.516291   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:54:26.516305   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.519234   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:54:26.519247   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:54:26.519254   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0108 12:54:26.519284   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:26.398222    1875 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.519293   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:54:26.519300   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:54:26.558560   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:54:26.558574   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.558592   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.558602   10230 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:26.398222    1875 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.760950   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:54:52.761003   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:54:52.797460   10230 command_runner.go:130] ! W0108 20:54:52.797019    2131 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:54:52.797474   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:54:52.820523   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:54:52.825992   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:54:52.887640   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:54:52.887658   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:54:52.913384   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:54:52.913402   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.916556   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:54:52.916569   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:54:52.916576   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0108 12:54:52.916621   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:52.797019    2131 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.916639   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:54:52.916655   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:54:52.956378   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:54:52.956394   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.956416   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.956429   10230 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:52.797019    2131 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:55:24.605996   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:55:24.606100   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:55:24.645169   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:55:24.744513   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:55:24.744528   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 12:55:24.761783   10230 command_runner.go:130] ! W0108 20:55:24.644734    2441 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:55:24.761803   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:55:24.761811   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:55:24.761824   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:55:24.761830   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:55:24.761835   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:55:24.761844   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:55:24.761850   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0108 12:55:24.761882   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:55:24.644734    2441 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:55:24.761890   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:55:24.761898   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:55:24.800820   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:55:24.800837   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:55:24.800857   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:55:24.800869   10230 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:55:24.644734    2441 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:56:11.610993   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:56:11.611064   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:56:11.650964   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:56:11.755239   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:56:11.755258   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 12:56:11.772411   10230 command_runner.go:130] ! W0108 20:56:11.649972    2847 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:56:11.772426   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:56:11.772440   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:56:11.772445   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:56:11.772450   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:56:11.772458   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:56:11.772467   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:56:11.772472   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0108 12:56:11.772512   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:56:11.649972    2847 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:56:11.772523   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:56:11.772535   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:56:11.812070   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:56:11.812091   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:56:11.812115   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:56:11.812133   10230 start.go:288] JoinCluster complete in 2m21.984909549s
	I0108 12:56:11.834139   10230 out.go:177] 
	W0108 12:56:11.855243   10230 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:56:11.649972    2847 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 12:56:11.855276   10230 out.go:239] * 
	W0108 12:56:11.856518   10230 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 12:56:11.919021   10230 out.go:177] 

                                                
                                                
** /stderr **
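The start failure captured above has two parts: kubeadm join refuses to re-register the worker because a node named "multinode-124908-m02" is still present and Ready in the cluster, and the follow-up kubeadm reset aborts because both the containerd and cri-dockerd sockets exist on the host, so kubeadm cannot pick a CRI endpoint on its own. A minimal manual recovery sketch, assuming the kubeconfig context multinode-124908 is reachable and reusing the token and hash values shown in the log (placeholders below):

    # drop the stale node object so the name can be reused
    kubectl --context multinode-124908 delete node multinode-124908-m02
    # reset the worker, naming the CRI endpoint explicitly so kubeadm does not have to choose between the two sockets
    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
    # re-run the join with the same flags minikube used above (substitute the token and hash from the log)
    sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-124908-m02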
multinode_test.go:295: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-124908" : exit status 80
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-124908
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-124908
helpers_test.go:235: (dbg) docker inspect multinode-124908:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f497746183ef6a8daaba6105b70c29cf942f5a286bffdfbef1e61a32fa568e7e",
	        "Created": "2023-01-08T20:49:16.62045435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 91291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T20:52:49.724029724Z",
	            "FinishedAt": "2023-01-08T20:52:23.887522269Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f497746183ef6a8daaba6105b70c29cf942f5a286bffdfbef1e61a32fa568e7e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f497746183ef6a8daaba6105b70c29cf942f5a286bffdfbef1e61a32fa568e7e/hostname",
	        "HostsPath": "/var/lib/docker/containers/f497746183ef6a8daaba6105b70c29cf942f5a286bffdfbef1e61a32fa568e7e/hosts",
	        "LogPath": "/var/lib/docker/containers/f497746183ef6a8daaba6105b70c29cf942f5a286bffdfbef1e61a32fa568e7e/f497746183ef6a8daaba6105b70c29cf942f5a286bffdfbef1e61a32fa568e7e-json.log",
	        "Name": "/multinode-124908",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-124908:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-124908",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5088e4d0462d007a6716b3d0adc6e1ec4cc2c246cd4517801f7fe85181426636-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5088e4d0462d007a6716b3d0adc6e1ec4cc2c246cd4517801f7fe85181426636/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5088e4d0462d007a6716b3d0adc6e1ec4cc2c246cd4517801f7fe85181426636/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5088e4d0462d007a6716b3d0adc6e1ec4cc2c246cd4517801f7fe85181426636/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-124908",
	                "Source": "/var/lib/docker/volumes/multinode-124908/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-124908",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-124908",
	                "name.minikube.sigs.k8s.io": "multinode-124908",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d570cba44c09366fa007182f51bee8c8dae9435efc123f0ba139e7fe7f1ce6d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51400"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51402"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51403"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51399"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4d570cba44c0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-124908": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f497746183ef",
	                        "multinode-124908"
	                    ],
	                    "NetworkID": "07f098ebf242888daad0efdcf0937a44b9c0a6b5029aec12710a53f1180bf4ff",
	                    "EndpointID": "7100818da6dad383a43574cbfdc8e1652e72e310e46f20016b725d3e006b0e36",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
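The inspect output confirms the control-plane container itself came back: State.Status is "running" (PID 91291) and the container holds 192.168.58.2 on the multinode-124908 network, with 8443/tcp published on host port 51399. The same fields can be read without the full JSON dump by passing a Go template to docker inspect, mirroring the -f queries used elsewhere in this log; a short sketch, assuming the container name multinode-124908:

    docker inspect -f '{{.State.Status}}' multinode-124908
    docker inspect -f '{{(index .NetworkSettings.Networks "multinode-124908").IPAddress}}' multinode-124908
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' multinode-124908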
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-124908 -n multinode-124908
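The --format flag in that status call is also a Go template, evaluated against minikube's status struct, so individual components can be checked in the same style; a brief sketch using the Host, Kubelet and APIServer fields that minikube status exposes:

    out/minikube-darwin-amd64 status -p multinode-124908 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'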
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-124908 logs -n 25: (3.404152006s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-124908 ssh -n                                                                                                     | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-124908 cp multinode-124908-m02:/home/docker/cp-test.txt                                                           | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2803111938/001/cp-test_multinode-124908-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n                                                                                                     | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-124908 cp multinode-124908-m02:/home/docker/cp-test.txt                                                           | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908:/home/docker/cp-test_multinode-124908-m02_multinode-124908.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n                                                                                                     | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n multinode-124908 sudo cat                                                                           | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | /home/docker/cp-test_multinode-124908-m02_multinode-124908.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-124908 cp multinode-124908-m02:/home/docker/cp-test.txt                                                           | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m03:/home/docker/cp-test_multinode-124908-m02_multinode-124908-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n                                                                                                     | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n multinode-124908-m03 sudo cat                                                                       | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | /home/docker/cp-test_multinode-124908-m02_multinode-124908-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-124908 cp testdata/cp-test.txt                                                                                    | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n                                                                                                     | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-124908 cp multinode-124908-m03:/home/docker/cp-test.txt                                                           | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2803111938/001/cp-test_multinode-124908-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n                                                                                                     | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-124908 cp multinode-124908-m03:/home/docker/cp-test.txt                                                           | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908:/home/docker/cp-test_multinode-124908-m03_multinode-124908.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n                                                                                                     | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n multinode-124908 sudo cat                                                                           | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | /home/docker/cp-test_multinode-124908-m03_multinode-124908.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-124908 cp multinode-124908-m03:/home/docker/cp-test.txt                                                           | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m02:/home/docker/cp-test_multinode-124908-m03_multinode-124908-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n                                                                                                     | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | multinode-124908-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-124908 ssh -n multinode-124908-m02 sudo cat                                                                       | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	|         | /home/docker/cp-test_multinode-124908-m03_multinode-124908-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-124908 node stop m03                                                                                              | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:51 PST |
	| node    | multinode-124908 node start                                                                                                 | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:51 PST | 08 Jan 23 12:52 PST |
	|         | m03 --alsologtostderr                                                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-124908                                                                                                    | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:52 PST |                     |
	| stop    | -p multinode-124908                                                                                                         | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:52 PST | 08 Jan 23 12:52 PST |
	| start   | -p multinode-124908                                                                                                         | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:52 PST |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-124908                                                                                                    | multinode-124908 | jenkins | v1.28.0 | 08 Jan 23 12:56 PST |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 12:52:48
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 12:52:48.476511   10230 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:52:48.476690   10230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:52:48.476695   10230 out.go:309] Setting ErrFile to fd 2...
	I0108 12:52:48.476699   10230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:52:48.476805   10230 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 12:52:48.477282   10230 out.go:303] Setting JSON to false
	I0108 12:52:48.496851   10230 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3141,"bootTime":1673208027,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 12:52:48.496933   10230 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:52:48.518863   10230 out.go:177] * [multinode-124908] minikube v1.28.0 on Darwin 13.0.1
	I0108 12:52:48.562685   10230 notify.go:220] Checking for updates...
	I0108 12:52:48.584492   10230 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 12:52:48.605868   10230 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:52:48.627742   10230 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:52:48.649564   10230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:52:48.670855   10230 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 12:52:48.692830   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:52:48.692882   10230 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 12:52:48.752565   10230 docker.go:137] docker version: linux-20.10.21
	I0108 12:52:48.752702   10230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:52:48.893190   10230 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-08 20:52:48.802495891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:52:48.915254   10230 out.go:177] * Using the docker driver based on existing profile
	I0108 12:52:48.936897   10230 start.go:294] selected driver: docker
	I0108 12:52:48.936925   10230 start.go:838] validating driver "docker" against &{Name:multinode-124908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false
logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:52:48.937144   10230 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 12:52:48.937405   10230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:52:49.080084   10230 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-08 20:52:48.989054771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:52:49.082593   10230 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 12:52:49.082624   10230 cni.go:95] Creating CNI manager for ""
	I0108 12:52:49.082633   10230 cni.go:156] 3 nodes found, recommending kindnet
	I0108 12:52:49.082649   10230 start_flags.go:317] config:
	{Name:multinode-124908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false
nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:52:49.126296   10230 out.go:177] * Starting control plane node multinode-124908 in cluster multinode-124908
	I0108 12:52:49.147504   10230 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 12:52:49.169447   10230 out.go:177] * Pulling base image ...
	I0108 12:52:49.212470   10230 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:52:49.212524   10230 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 12:52:49.212576   10230 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 12:52:49.212608   10230 cache.go:57] Caching tarball of preloaded images
	I0108 12:52:49.212817   10230 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 12:52:49.212842   10230 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 12:52:49.213872   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:52:49.269073   10230 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 12:52:49.269089   10230 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 12:52:49.269136   10230 cache.go:193] Successfully downloaded all kic artifacts
	I0108 12:52:49.269193   10230 start.go:364] acquiring machines lock for multinode-124908: {Name:mk965de3adbf36f4b9fc247c2c9d993fbcc7d3eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 12:52:49.269287   10230 start.go:368] acquired machines lock for "multinode-124908" in 72.18µs
	I0108 12:52:49.269311   10230 start.go:96] Skipping create...Using existing machine configuration
	I0108 12:52:49.269319   10230 fix.go:55] fixHost starting: 
	I0108 12:52:49.269569   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:52:49.325214   10230 fix.go:103] recreateIfNeeded on multinode-124908: state=Stopped err=<nil>
	W0108 12:52:49.325247   10230 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 12:52:49.368007   10230 out.go:177] * Restarting existing docker container for "multinode-124908" ...
	I0108 12:52:49.390173   10230 cli_runner.go:164] Run: docker start multinode-124908
	I0108 12:52:49.731447   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:52:49.792492   10230 kic.go:415] container "multinode-124908" state is running.
	I0108 12:52:49.793109   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908
	I0108 12:52:49.856775   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:52:49.857472   10230 machine.go:88] provisioning docker machine ...
	I0108 12:52:49.857522   10230 ubuntu.go:169] provisioning hostname "multinode-124908"
	I0108 12:52:49.857646   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:49.928096   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:49.928348   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:49.928364   10230 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-124908 && echo "multinode-124908" | sudo tee /etc/hostname
	I0108 12:52:50.068101   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-124908
	
	I0108 12:52:50.068246   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:50.132590   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:50.132752   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:50.132766   10230 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-124908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-124908/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-124908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 12:52:50.251494   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 12:52:50.251517   10230 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 12:52:50.251543   10230 ubuntu.go:177] setting up certificates
	I0108 12:52:50.251552   10230 provision.go:83] configureAuth start
	I0108 12:52:50.251650   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908
	I0108 12:52:50.313533   10230 provision.go:138] copyHostCerts
	I0108 12:52:50.313583   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:52:50.313649   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 12:52:50.313658   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:52:50.313785   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 12:52:50.313970   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:52:50.314016   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 12:52:50.314021   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:52:50.314085   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 12:52:50.314205   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:52:50.314239   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 12:52:50.314244   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:52:50.314307   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 12:52:50.314434   10230 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.multinode-124908 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-124908]
	I0108 12:52:50.380198   10230 provision.go:172] copyRemoteCerts
	I0108 12:52:50.380286   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 12:52:50.380350   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:50.444096   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:50.531821   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 12:52:50.531929   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 12:52:50.552934   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 12:52:50.553022   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 12:52:50.572666   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 12:52:50.572782   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 12:52:50.592903   10230 provision.go:86] duration metric: configureAuth took 341.34064ms
	I0108 12:52:50.592919   10230 ubuntu.go:193] setting minikube options for container-runtime
	I0108 12:52:50.593116   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:52:50.593194   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:50.654868   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:50.655037   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:50.655047   10230 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 12:52:50.773475   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 12:52:50.773492   10230 ubuntu.go:71] root file system type: overlay
	I0108 12:52:50.773669   10230 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 12:52:50.773794   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:50.837942   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:50.838110   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:50.838158   10230 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 12:52:50.963696   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 12:52:50.963827   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.085008   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:52:51.085170   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51400 <nil> <nil>}
	I0108 12:52:51.085184   10230 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 12:52:51.209125   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 12:52:51.209142   10230 machine.go:91] provisioned docker machine in 1.351654992s
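The docker.service handling above follows a compare-and-swap pattern: the desired unit is written to docker.service.new, diffed against the installed unit, and the move plus daemon-reload/restart only runs when the two differ (diff exits non-zero on any difference, so nothing is restarted on a no-op reprovision). A minimal standalone sketch of the same pattern, using a hypothetical unit name rather than minikube's generated docker.service:

# stage the desired unit next to the installed one (contents abbreviated)
sudo tee /lib/systemd/system/example.service.new >/dev/null <<'EOF'
[Unit]
Description=Example service
[Service]
ExecStart=/bin/sleep infinity
EOF
# replace, reload and restart only if the staged unit differs from the installed one
sudo diff -u /lib/systemd/system/example.service /lib/systemd/system/example.service.new || {
  sudo mv /lib/systemd/system/example.service.new /lib/systemd/system/example.service
  sudo systemctl daemon-reload && sudo systemctl restart example.service
}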
	I0108 12:52:51.209153   10230 start.go:300] post-start starting for "multinode-124908" (driver="docker")
	I0108 12:52:51.209159   10230 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 12:52:51.209245   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 12:52:51.209315   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.266923   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:51.354711   10230 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 12:52:51.358249   10230 command_runner.go:130] > NAME="Ubuntu"
	I0108 12:52:51.358259   10230 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0108 12:52:51.358262   10230 command_runner.go:130] > ID=ubuntu
	I0108 12:52:51.358266   10230 command_runner.go:130] > ID_LIKE=debian
	I0108 12:52:51.358270   10230 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0108 12:52:51.358274   10230 command_runner.go:130] > VERSION_ID="20.04"
	I0108 12:52:51.358278   10230 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 12:52:51.358283   10230 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 12:52:51.358287   10230 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 12:52:51.358297   10230 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 12:52:51.358301   10230 command_runner.go:130] > VERSION_CODENAME=focal
	I0108 12:52:51.358313   10230 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0108 12:52:51.358361   10230 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 12:52:51.358373   10230 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 12:52:51.358380   10230 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 12:52:51.358384   10230 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 12:52:51.358397   10230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 12:52:51.358486   10230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 12:52:51.358651   10230 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 12:52:51.358658   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /etc/ssl/certs/40832.pem
	I0108 12:52:51.358838   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 12:52:51.366223   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:52:51.383155   10230 start.go:303] post-start completed in 173.994497ms
	I0108 12:52:51.383248   10230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 12:52:51.383323   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.439260   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:51.523964   10230 command_runner.go:130] > 12%!
	(MISSING)I0108 12:52:51.524042   10230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 12:52:51.528589   10230 command_runner.go:130] > 49G
	I0108 12:52:51.528971   10230 fix.go:57] fixHost completed within 2.259674652s
	I0108 12:52:51.528983   10230 start.go:83] releasing machines lock for "multinode-124908", held for 2.259712111s
	I0108 12:52:51.529095   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908
	I0108 12:52:51.585931   10230 ssh_runner.go:195] Run: cat /version.json
	I0108 12:52:51.585962   10230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 12:52:51.586003   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.586036   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:51.647239   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:51.647408   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:52:51.796032   10230 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 12:52:51.796099   10230 command_runner.go:130] > {"iso_version": "v1.28.0-1668700269-15235", "kicbase_version": "v0.0.36-1668787669-15272", "minikube_version": "v1.28.0", "commit": "c883d3041e11322fb5c977f082b70bf31015848d"}
	I0108 12:52:51.796257   10230 ssh_runner.go:195] Run: systemctl --version
	I0108 12:52:51.801440   10230 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I0108 12:52:51.801458   10230 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0108 12:52:51.801581   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 12:52:51.809034   10230 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0108 12:52:51.821931   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:52:51.885088   10230 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 12:52:51.969615   10230 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 12:52:51.979408   10230 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0108 12:52:51.979518   10230 command_runner.go:130] > [Unit]
	I0108 12:52:51.979528   10230 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 12:52:51.979533   10230 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 12:52:51.979549   10230 command_runner.go:130] > BindsTo=containerd.service
	I0108 12:52:51.979554   10230 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0108 12:52:51.979558   10230 command_runner.go:130] > Wants=network-online.target
	I0108 12:52:51.979562   10230 command_runner.go:130] > Requires=docker.socket
	I0108 12:52:51.979566   10230 command_runner.go:130] > StartLimitBurst=3
	I0108 12:52:51.979569   10230 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 12:52:51.979572   10230 command_runner.go:130] > [Service]
	I0108 12:52:51.979576   10230 command_runner.go:130] > Type=notify
	I0108 12:52:51.979579   10230 command_runner.go:130] > Restart=on-failure
	I0108 12:52:51.979585   10230 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 12:52:51.979596   10230 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 12:52:51.979603   10230 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 12:52:51.979608   10230 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 12:52:51.979614   10230 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 12:52:51.979622   10230 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 12:52:51.979629   10230 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 12:52:51.979644   10230 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 12:52:51.979650   10230 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 12:52:51.979662   10230 command_runner.go:130] > ExecStart=
	I0108 12:52:51.979685   10230 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0108 12:52:51.979698   10230 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 12:52:51.979704   10230 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 12:52:51.979710   10230 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 12:52:51.979716   10230 command_runner.go:130] > LimitNOFILE=infinity
	I0108 12:52:51.979720   10230 command_runner.go:130] > LimitNPROC=infinity
	I0108 12:52:51.979724   10230 command_runner.go:130] > LimitCORE=infinity
	I0108 12:52:51.979734   10230 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 12:52:51.979742   10230 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 12:52:51.979746   10230 command_runner.go:130] > TasksMax=infinity
	I0108 12:52:51.979750   10230 command_runner.go:130] > TimeoutStartSec=0
	I0108 12:52:51.979758   10230 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 12:52:51.979764   10230 command_runner.go:130] > Delegate=yes
	I0108 12:52:51.979768   10230 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 12:52:51.979772   10230 command_runner.go:130] > KillMode=process
	I0108 12:52:51.979782   10230 command_runner.go:130] > [Install]
	I0108 12:52:51.979788   10230 command_runner.go:130] > WantedBy=multi-user.target
	I0108 12:52:51.980188   10230 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 12:52:51.980262   10230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 12:52:51.990190   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 12:52:52.002251   10230 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 12:52:52.002263   10230 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 12:52:52.003132   10230 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 12:52:52.069656   10230 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 12:52:52.140848   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:52:52.205600   10230 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 12:52:52.445232   10230 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 12:52:52.518548   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:52:52.581781   10230 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 12:52:52.591462   10230 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 12:52:52.591569   10230 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 12:52:52.595434   10230 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 12:52:52.595444   10230 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 12:52:52.595451   10230 command_runner.go:130] > Device: 96h/150d	Inode: 117         Links: 1
	I0108 12:52:52.595459   10230 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0108 12:52:52.595466   10230 command_runner.go:130] > Access: 2023-01-08 20:52:51.893693381 +0000
	I0108 12:52:52.595478   10230 command_runner.go:130] > Modify: 2023-01-08 20:52:51.893693381 +0000
	I0108 12:52:52.595483   10230 command_runner.go:130] > Change: 2023-01-08 20:52:51.894693381 +0000
	I0108 12:52:52.595486   10230 command_runner.go:130] >  Birth: -
	I0108 12:52:52.595505   10230 start.go:472] Will wait 60s for crictl version
	I0108 12:52:52.595557   10230 ssh_runner.go:195] Run: sudo crictl version
	I0108 12:52:52.623617   10230 command_runner.go:130] > Version:  0.1.0
	I0108 12:52:52.623629   10230 command_runner.go:130] > RuntimeName:  docker
	I0108 12:52:52.623633   10230 command_runner.go:130] > RuntimeVersion:  20.10.21
	I0108 12:52:52.623638   10230 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0108 12:52:52.625732   10230 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0108 12:52:52.625831   10230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:52:52.652816   10230 command_runner.go:130] > 20.10.21
	I0108 12:52:52.655127   10230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:52:52.682770   10230 command_runner.go:130] > 20.10.21
	I0108 12:52:52.728662   10230 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 12:52:52.728927   10230 cli_runner.go:164] Run: docker exec -t multinode-124908 dig +short host.docker.internal
	I0108 12:52:52.843621   10230 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 12:52:52.843752   10230 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 12:52:52.848044   10230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 12:52:52.857807   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:52.914657   10230 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:52:52.914751   10230 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 12:52:52.936515   10230 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I0108 12:52:52.936529   10230 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 12:52:52.936533   10230 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I0108 12:52:52.936541   10230 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I0108 12:52:52.936545   10230 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0108 12:52:52.936550   10230 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0108 12:52:52.936558   10230 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0108 12:52:52.936566   10230 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0108 12:52:52.936570   10230 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0108 12:52:52.936574   10230 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 12:52:52.936578   10230 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0108 12:52:52.938706   10230 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0108 12:52:52.938724   10230 docker.go:543] Images already preloaded, skipping extraction
	I0108 12:52:52.938830   10230 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 12:52:52.961775   10230 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I0108 12:52:52.961787   10230 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I0108 12:52:52.961792   10230 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I0108 12:52:52.961796   10230 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I0108 12:52:52.961800   10230 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0108 12:52:52.961805   10230 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0108 12:52:52.961808   10230 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0108 12:52:52.961812   10230 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0108 12:52:52.961816   10230 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0108 12:52:52.961821   10230 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 12:52:52.961826   10230 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0108 12:52:52.963994   10230 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0108 12:52:52.964012   10230 cache_images.go:84] Images are preloaded, skipping loading
	I0108 12:52:52.964110   10230 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 12:52:53.031049   10230 command_runner.go:130] > systemd
	I0108 12:52:53.033769   10230 cni.go:95] Creating CNI manager for ""
	I0108 12:52:53.033783   10230 cni.go:156] 3 nodes found, recommending kindnet
	I0108 12:52:53.033799   10230 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 12:52:53.033811   10230 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-124908 NodeName:multinode-124908 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 12:52:53.033919   10230 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-124908"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 12:52:53.033992   10230 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-124908 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 12:52:53.034066   10230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 12:52:53.041267   10230 command_runner.go:130] > kubeadm
	I0108 12:52:53.041276   10230 command_runner.go:130] > kubectl
	I0108 12:52:53.041280   10230 command_runner.go:130] > kubelet
	I0108 12:52:53.041935   10230 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 12:52:53.041998   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 12:52:53.049289   10230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I0108 12:52:53.061963   10230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 12:52:53.074600   10230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
	I0108 12:52:53.087393   10230 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 12:52:53.091268   10230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 12:52:53.101056   10230 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908 for IP: 192.168.58.2
	I0108 12:52:53.101174   10230 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 12:52:53.101232   10230 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 12:52:53.101320   10230 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key
	I0108 12:52:53.101402   10230 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.key.cee25041
	I0108 12:52:53.101467   10230 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.key
	I0108 12:52:53.101474   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 12:52:53.101504   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 12:52:53.101532   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 12:52:53.101555   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 12:52:53.101577   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 12:52:53.101599   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 12:52:53.101620   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 12:52:53.101654   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 12:52:53.101777   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 12:52:53.101816   10230 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 12:52:53.101828   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 12:52:53.101861   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 12:52:53.101897   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 12:52:53.101932   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 12:52:53.102010   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:52:53.102045   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.102069   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.102091   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem -> /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.102576   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 12:52:53.119843   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 12:52:53.136878   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 12:52:53.154404   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 12:52:53.171856   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 12:52:53.188984   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 12:52:53.205781   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 12:52:53.223289   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 12:52:53.240581   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 12:52:53.258415   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 12:52:53.275736   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 12:52:53.292619   10230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 12:52:53.305384   10230 ssh_runner.go:195] Run: openssl version
	I0108 12:52:53.310770   10230 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0108 12:52:53.310904   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 12:52:53.319002   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.322950   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.322968   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.323014   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:52:53.328167   10230 command_runner.go:130] > b5213941
	I0108 12:52:53.328524   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 12:52:53.336219   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 12:52:53.344122   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.348309   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.348373   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.348430   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 12:52:53.353404   10230 command_runner.go:130] > 51391683
	I0108 12:52:53.353799   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 12:52:53.361562   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 12:52:53.369417   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.373480   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.373578   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.373631   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 12:52:53.378641   10230 command_runner.go:130] > 3ec20f2e
	I0108 12:52:53.379054   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
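The openssl/ln steps above install the copied certificates the way OpenSSL expects to find trusted CAs: each PEM under /usr/share/ca-certificates is hashed with OpenSSL's subject-hash and symlinked into /etc/ssl/certs as <hash>.0. A minimal sketch of the same pattern, with a hypothetical certificate path:

# compute the OpenSSL subject hash of the certificate
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
# link it into the directory OpenSSL scans for trusted CAs
sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${hash}.0"

The .0 suffix is the collision index OpenSSL uses when more than one certificate shares the same subject hash.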
	I0108 12:52:53.386797   10230 kubeadm.go:396] StartCluster: {Name:multinode-124908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:52:53.386940   10230 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 12:52:53.409759   10230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 12:52:53.417076   10230 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0108 12:52:53.417086   10230 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0108 12:52:53.417091   10230 command_runner.go:130] > /var/lib/minikube/etcd:
	I0108 12:52:53.417094   10230 command_runner.go:130] > member
	I0108 12:52:53.417745   10230 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 12:52:53.417764   10230 kubeadm.go:627] restartCluster start
	I0108 12:52:53.417821   10230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 12:52:53.424840   10230 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:53.424924   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:52:53.505497   10230 kubeconfig.go:135] verify returned: extract IP: "multinode-124908" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:52:53.505586   10230 kubeconfig.go:146] "multinode-124908" context is missing from /Users/jenkins/minikube-integration/15565-2761/kubeconfig - will repair!
	I0108 12:52:53.505820   10230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:52:53.506248   10230 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:52:53.506470   10230 kapi.go:59] client config for multinode-124908: &rest.Config{Host:"https://127.0.0.1:51399", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 12:52:53.506824   10230 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 12:52:53.507014   10230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 12:52:53.514978   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:53.515042   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:53.524027   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:53.726136   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:53.726313   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:53.737412   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:53.924812   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:53.924981   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:53.935992   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.125324   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.125467   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.136333   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.326135   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.326315   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.337453   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.524413   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.524541   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.535379   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.725393   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.725598   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.737223   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:54.926217   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:54.926369   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:54.937621   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.126122   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.126306   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.137248   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.324353   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.324535   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.335906   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.524744   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.524921   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.535972   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.726145   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.726307   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.737509   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:55.926078   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:55.926195   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:55.937217   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.125041   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:56.125167   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:56.136182   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.324879   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:56.325061   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:56.336205   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.525915   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:56.526093   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:56.537187   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.537198   10230 api_server.go:165] Checking apiserver status ...
	I0108 12:52:56.537254   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 12:52:56.545669   10230 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.545683   10230 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
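The block of repeated "Checking apiserver status ..." entries above is minikube's process-wait loop: it re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` on a short interval until a PID appears or the deadline passes, at which point it concludes the node "needs reconfigure". A minimal sketch of that polling pattern, assuming a plain local exec runner rather than minikube's ssh_runner, would look roughly like:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID re-runs pgrep until it prints a PID or the context expires.
// pgrep exits non-zero when nothing matches, which is exactly the
// "Process exited with status 1" seen in the log above.
func waitForAPIServerPID(ctx context.Context, interval time.Duration) (string, error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("timed out waiting for kube-apiserver: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx, 200*time.Millisecond)
	fmt.Println(pid, err)
}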
	I0108 12:52:56.545691   10230 kubeadm.go:1114] stopping kube-system containers ...
	I0108 12:52:56.545773   10230 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 12:52:56.569982   10230 command_runner.go:130] > 102afbd16ebe
	I0108 12:52:56.569993   10230 command_runner.go:130] > 0fdc50ce7b7b
	I0108 12:52:56.569997   10230 command_runner.go:130] > 87704622b4c0
	I0108 12:52:56.570000   10230 command_runner.go:130] > bec02388b605
	I0108 12:52:56.570004   10230 command_runner.go:130] > 5f5efd278d83
	I0108 12:52:56.570013   10230 command_runner.go:130] > e8a051889a28
	I0108 12:52:56.570017   10230 command_runner.go:130] > e1fcc1a318f0
	I0108 12:52:56.570020   10230 command_runner.go:130] > c87fa6df09c3
	I0108 12:52:56.570024   10230 command_runner.go:130] > 015d397fcc74
	I0108 12:52:56.570035   10230 command_runner.go:130] > 284f82945805
	I0108 12:52:56.570039   10230 command_runner.go:130] > 3af41681452e
	I0108 12:52:56.570042   10230 command_runner.go:130] > f321d9700124
	I0108 12:52:56.570059   10230 command_runner.go:130] > 0f0a2ebaa1f8
	I0108 12:52:56.570068   10230 command_runner.go:130] > adaa05119a60
	I0108 12:52:56.570072   10230 command_runner.go:130] > 56a7fc40cef9
	I0108 12:52:56.570075   10230 command_runner.go:130] > a8533a49b21a
	I0108 12:52:56.572104   10230 docker.go:444] Stopping containers: [102afbd16ebe 0fdc50ce7b7b 87704622b4c0 bec02388b605 5f5efd278d83 e8a051889a28 e1fcc1a318f0 c87fa6df09c3 015d397fcc74 284f82945805 3af41681452e f321d9700124 0f0a2ebaa1f8 adaa05119a60 56a7fc40cef9 a8533a49b21a]
	I0108 12:52:56.572202   10230 ssh_runner.go:195] Run: docker stop 102afbd16ebe 0fdc50ce7b7b 87704622b4c0 bec02388b605 5f5efd278d83 e8a051889a28 e1fcc1a318f0 c87fa6df09c3 015d397fcc74 284f82945805 3af41681452e f321d9700124 0f0a2ebaa1f8 adaa05119a60 56a7fc40cef9 a8533a49b21a
	I0108 12:52:56.593957   10230 command_runner.go:130] > 102afbd16ebe
	I0108 12:52:56.594159   10230 command_runner.go:130] > 0fdc50ce7b7b
	I0108 12:52:56.594170   10230 command_runner.go:130] > 87704622b4c0
	I0108 12:52:56.594175   10230 command_runner.go:130] > bec02388b605
	I0108 12:52:56.594181   10230 command_runner.go:130] > 5f5efd278d83
	I0108 12:52:56.594185   10230 command_runner.go:130] > e8a051889a28
	I0108 12:52:56.594189   10230 command_runner.go:130] > e1fcc1a318f0
	I0108 12:52:56.594194   10230 command_runner.go:130] > c87fa6df09c3
	I0108 12:52:56.594199   10230 command_runner.go:130] > 015d397fcc74
	I0108 12:52:56.594204   10230 command_runner.go:130] > 284f82945805
	I0108 12:52:56.594208   10230 command_runner.go:130] > 3af41681452e
	I0108 12:52:56.594211   10230 command_runner.go:130] > f321d9700124
	I0108 12:52:56.594216   10230 command_runner.go:130] > 0f0a2ebaa1f8
	I0108 12:52:56.594219   10230 command_runner.go:130] > adaa05119a60
	I0108 12:52:56.594224   10230 command_runner.go:130] > 56a7fc40cef9
	I0108 12:52:56.594227   10230 command_runner.go:130] > a8533a49b21a
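The "stopping kube-system containers" step above is a two-command pair: list the IDs of containers whose names match the kubelet's k8s_<container>_<pod>_kube-system_ naming pattern, then pass all of them to a single `docker stop`. A standalone sketch of the same pair of commands, exec'd locally instead of over the ssh runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter and format flags as in the log above.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	fmt.Println("Stopping containers:", ids)
	stop := exec.Command("docker", append([]string{"stop"}, ids...)...)
	if stopped, err := stop.CombinedOutput(); err != nil {
		fmt.Println("docker stop failed:", err)
	} else {
		fmt.Print(string(stopped))
	}
}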
	I0108 12:52:56.596640   10230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 12:52:56.607237   10230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 12:52:56.614187   10230 command_runner.go:130] > -rw------- 1 root root 5639 Jan  8 20:49 /etc/kubernetes/admin.conf
	I0108 12:52:56.614198   10230 command_runner.go:130] > -rw------- 1 root root 5652 Jan  8 20:49 /etc/kubernetes/controller-manager.conf
	I0108 12:52:56.614203   10230 command_runner.go:130] > -rw------- 1 root root 2003 Jan  8 20:49 /etc/kubernetes/kubelet.conf
	I0108 12:52:56.614212   10230 command_runner.go:130] > -rw------- 1 root root 5604 Jan  8 20:49 /etc/kubernetes/scheduler.conf
	I0108 12:52:56.614896   10230 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 20:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 20:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Jan  8 20:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 20:49 /etc/kubernetes/scheduler.conf
	
	I0108 12:52:56.614961   10230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 12:52:56.621657   10230 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0108 12:52:56.622412   10230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 12:52:56.629082   10230 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0108 12:52:56.629737   10230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 12:52:56.637066   10230 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.637127   10230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 12:52:56.644154   10230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 12:52:56.651453   10230 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:52:56.651512   10230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
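The grep/rm sequence above is minikube's kubeconfig sanity check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not reference it (here controller-manager.conf and scheduler.conf) is removed so the following `kubeadm init phase kubeconfig` can rewrite it. The paths and endpoint in this sketch mirror the log; the helper itself is illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// ensureKubeconfigEndpoint keeps the file if it points at the expected
// endpoint, otherwise deletes it so kubeadm regenerates it.
func ensureKubeconfigEndpoint(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil
	}
	fmt.Printf("%q not found in %s - removing so kubeadm can regenerate it\n", endpoint, path)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureKubeconfigEndpoint(f); err != nil {
			fmt.Println(err)
		}
	}
}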
	I0108 12:52:56.658752   10230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 12:52:56.666322   10230 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 12:52:56.666335   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:56.710529   10230 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 12:52:56.710545   10230 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 12:52:56.710753   10230 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 12:52:56.710952   10230 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 12:52:56.711290   10230 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0108 12:52:56.711531   10230 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0108 12:52:56.711656   10230 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0108 12:52:56.711833   10230 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0108 12:52:56.711854   10230 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0108 12:52:56.712420   10230 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 12:52:56.712434   10230 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 12:52:56.712443   10230 command_runner.go:130] > [certs] Using the existing "sa" key
	I0108 12:52:56.715507   10230 command_runner.go:130] ! W0108 20:52:56.705990    1166 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:52:56.715528   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:56.758836   10230 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 12:52:56.950261   10230 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0108 12:52:57.078955   10230 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0108 12:52:57.122673   10230 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 12:52:57.178930   10230 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 12:52:57.183005   10230 command_runner.go:130] ! W0108 20:52:56.754526    1176 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:52:57.183028   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:57.237241   10230 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 12:52:57.237832   10230 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 12:52:57.237842   10230 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 12:52:57.311096   10230 command_runner.go:130] ! W0108 20:52:57.223346    1199 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:52:57.311118   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:57.357742   10230 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 12:52:57.357754   10230 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 12:52:57.359538   10230 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 12:52:57.360408   10230 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 12:52:57.364205   10230 command_runner.go:130] ! W0108 20:52:57.352313    1233 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:52:57.364231   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:52:57.451663   10230 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 12:52:57.460408   10230 command_runner.go:130] ! W0108 20:52:57.446092    1248 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
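The five `kubeadm init phase ...` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) are issued back to back through `/bin/bash -c`, with the version-pinned binaries directory prepended to PATH and the freshly copied kubeadm.yaml as config. A rough local equivalent, assuming the same layout under /var/lib/minikube and /var/tmp/minikube as in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const pathPrefix = "/var/lib/minikube/binaries/v1.25.3"
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			pathPrefix, phase)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("%s\n%s", cmd, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}

Running the phases individually (rather than a full `kubeadm init`) is what lets the existing certificates, admin.conf and kubelet.conf be reused while only the removed kubeconfigs and the static pod manifests are regenerated, as the "[certs] Using existing ..." and "[kubeconfig] Writing ..." lines above show.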
	I0108 12:52:57.460441   10230 api_server.go:51] waiting for apiserver process to appear ...
	I0108 12:52:57.460508   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:52:57.973186   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:52:58.471789   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:52:58.971735   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:52:58.982583   10230 command_runner.go:130] > 1732
	I0108 12:52:58.983279   10230 api_server.go:71] duration metric: took 1.522857133s to wait for apiserver process to appear ...
	I0108 12:52:58.983292   10230 api_server.go:87] waiting for apiserver healthz status ...
	I0108 12:52:58.983327   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:01.508071   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 12:53:01.508100   10230 api_server.go:102] status: https://127.0.0.1:51399/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 12:53:02.008480   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:02.015669   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 12:53:02.015688   10230 api_server.go:102] status: https://127.0.0.1:51399/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 12:53:02.508418   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:02.515192   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 12:53:02.515211   10230 api_server.go:102] status: https://127.0.0.1:51399/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 12:53:03.008292   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:03.014084   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 200:
	ok
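The healthz wait above passes through three states: 403 while anonymous access to /healthz is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles are still pending, then 200 "ok". A bare-bones poller that reproduces the probe is sketched below; the port is the one from the log, and certificate verification is skipped purely for illustration of a local apiserver check:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://127.0.0.1:51399/healthz")
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}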
	I0108 12:53:03.014144   10230 round_trippers.go:463] GET https://127.0.0.1:51399/version
	I0108 12:53:03.014150   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:03.014158   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:03.014168   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:03.021122   10230 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 12:53:03.021134   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:03.021141   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:03.021147   10230 round_trippers.go:580]     Content-Length: 263
	I0108 12:53:03.021153   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:03 GMT
	I0108 12:53:03.021159   10230 round_trippers.go:580]     Audit-Id: 684e78a1-475c-44d5-a7ff-e3c29595183b
	I0108 12:53:03.021164   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:03.021169   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:03.021173   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:03.021196   10230 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 12:53:03.021255   10230 api_server.go:140] control plane version: v1.25.3
	I0108 12:53:03.021264   10230 api_server.go:130] duration metric: took 4.038014039s to wait for apiserver health ...
	I0108 12:53:03.021271   10230 cni.go:95] Creating CNI manager for ""
	I0108 12:53:03.021276   10230 cni.go:156] 3 nodes found, recommending kindnet
	I0108 12:53:03.042576   10230 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 12:53:03.062708   10230 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 12:53:03.067603   10230 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 12:53:03.067617   10230 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0108 12:53:03.067622   10230 command_runner.go:130] > Device: 8eh/142d	Inode: 267161      Links: 1
	I0108 12:53:03.067627   10230 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 12:53:03.067637   10230 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0108 12:53:03.067644   10230 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0108 12:53:03.067650   10230 command_runner.go:130] > Change: 2023-01-08 20:27:37.453848555 +0000
	I0108 12:53:03.067653   10230 command_runner.go:130] >  Birth: -
	I0108 12:53:03.067704   10230 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0108 12:53:03.067711   10230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0108 12:53:03.082948   10230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 12:53:04.549960   10230 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 12:53:04.552431   10230 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 12:53:04.554814   10230 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 12:53:04.570180   10230 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 12:53:04.637826   10230 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.554871283s)
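Applying the CNI manifest above is a two-step operation: the rendered kindnet YAML (2429 bytes) is copied to /var/tmp/minikube/cni.yaml, then applied with the node's own kubectl binary and kubeconfig. A local stand-in for the same steps follows; the manifest body is elided here, since minikube renders its own kindnet template:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Placeholder content; the real file carries the kindnet DaemonSet,
	// ClusterRole, ClusterRoleBinding and ServiceAccount seen in the log.
	manifest := []byte("# kindnet manifest ...\n")
	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0644); err != nil {
		fmt.Println(err)
		return
	}
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.3/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}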
	I0108 12:53:04.637859   10230 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 12:53:04.637934   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:04.637942   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.637951   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.637958   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.642031   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:04.642049   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.642057   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.642064   10230 round_trippers.go:580]     Audit-Id: 31994e7b-307a-4a94-9abd-851474259fcb
	I0108 12:53:04.642070   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.642077   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.642084   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.642091   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.643605   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"696"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84480 chars]
	I0108 12:53:04.646687   10230 system_pods.go:59] 12 kube-system pods found
	I0108 12:53:04.646703   10230 system_pods.go:61] "coredns-565d847f94-f6gqj" [1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 12:53:04.646713   10230 system_pods.go:61] "etcd-multinode-124908" [9cf1a608-48d9-453e-bd35-263521e756e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 12:53:04.646718   10230 system_pods.go:61] "kindnet-4j92t" [2e0611f9-b324-4059-b858-ca1cc99bb8d9] Running
	I0108 12:53:04.646722   10230 system_pods.go:61] "kindnet-79h6s" [8899610c-9df6-488d-af2f-2848f1ce546b] Running
	I0108 12:53:04.646733   10230 system_pods.go:61] "kindnet-pj4l5" [82ac6efa-2268-472b-bd72-171778eabeb6] Running
	I0108 12:53:04.646738   10230 system_pods.go:61] "kube-apiserver-multinode-124908" [7e7e7fa5-c965-4737-83b1-afd48eb87547] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 12:53:04.646742   10230 system_pods.go:61] "kube-controller-manager-multinode-124908" [41ff8cf2-6b35-47c2-8f48-120e6adf98bb] Running
	I0108 12:53:04.646760   10230 system_pods.go:61] "kube-proxy-hq6ms" [3deaa832-bac0-47e3-bdef-482b094bf90f] Running
	I0108 12:53:04.646768   10230 system_pods.go:61] "kube-proxy-kzv6k" [05a4b261-aa83-4e23-83c6-0a50d659b5b7] Running
	I0108 12:53:04.646772   10230 system_pods.go:61] "kube-proxy-vx6bb" [7bff7041-dbf7-4143-9f70-52a12dd69f64] Running
	I0108 12:53:04.646779   10230 system_pods.go:61] "kube-scheduler-multinode-124908" [3dd0df78-6cad-4b47-a66f-74c412846b79] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 12:53:04.646787   10230 system_pods.go:61] "storage-provisioner" [6eda9f8e-814b-4a17-9ec8-89bd52973d7b] Running
	I0108 12:53:04.646792   10230 system_pods.go:74] duration metric: took 8.929012ms to wait for pod list to return data ...
	I0108 12:53:04.646797   10230 node_conditions.go:102] verifying NodePressure condition ...
	I0108 12:53:04.646846   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes
	I0108 12:53:04.646851   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.646857   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.646864   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.650155   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:04.650168   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.650174   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.650179   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.650184   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.650188   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.650193   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.650198   10230 round_trippers.go:580]     Audit-Id: e15e64ec-6068-491f-918b-1d2b6500b142
	I0108 12:53:04.650357   10230 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"696"},"items":[{"metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16257 chars]
	I0108 12:53:04.650959   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:04.650970   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:04.650980   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:04.650983   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:04.650987   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:04.650990   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:04.650993   10230 node_conditions.go:105] duration metric: took 4.191267ms to run NodePressure ...
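The NodePressure check above lists /api/v1/nodes and reads each node's capacity (61202244Ki of ephemeral storage and 6 CPUs for all three nodes in this run). A trimmed-down decoder for just those fields is sketched below; it reuses the TLS-skipping client from the earlier healthz sketch, and note that the real request is made with the admin kubeconfig's client certificate, so an anonymous call like this one would normally be rejected:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get("https://127.0.0.1:51399/api/v1/nodes")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	var nodes nodeList
	if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil {
		fmt.Println(err)
		return
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
	}
}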
	I0108 12:53:04.651011   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 12:53:04.845350   10230 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 12:53:04.878968   10230 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 12:53:04.882462   10230 command_runner.go:130] ! W0108 20:53:04.695681    2591 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:53:04.882486   10230 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 12:53:04.882545   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0108 12:53:04.882550   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.882561   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.882568   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.885663   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:04.885676   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.885685   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.885693   10230 round_trippers.go:580]     Audit-Id: c473700b-a017-49f4-83df-67c734502ca2
	I0108 12:53:04.885701   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.885711   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.885721   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.885729   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.886121   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"699"},"items":[{"metadata":{"name":"etcd-multinode-124908","namespace":"kube-system","uid":"9cf1a608-48d9-453e-bd35-263521e756e4","resourceVersion":"691","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.mirror":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.seen":"2023-01-08T20:49:35.642390520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30901 chars]
	I0108 12:53:04.886873   10230 kubeadm.go:778] kubelet initialised
	I0108 12:53:04.886884   10230 kubeadm.go:779] duration metric: took 4.38897ms waiting for restarted kubelet to initialise ...
	I0108 12:53:04.886890   10230 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 12:53:04.886946   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:04.886951   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.886957   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.886963   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.890935   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:04.890950   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.890958   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.890965   10230 round_trippers.go:580]     Audit-Id: 77394af1-9320-4fcb-a335-0542b5bf9807
	I0108 12:53:04.890972   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.890978   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.890985   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.890992   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.893087   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"699"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84895 chars]
	I0108 12:53:04.895018   10230 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-f6gqj" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:04.895054   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:04.895059   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.895077   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.895085   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.897564   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:04.897577   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.897584   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.897590   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.897596   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.897601   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.897606   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.897611   10230 round_trippers.go:580]     Audit-Id: 579029bf-6ccb-4889-aa93-21f8ce892022
	I0108 12:53:04.897668   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I0108 12:53:04.897931   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:04.897939   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:04.897945   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:04.897952   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:04.900086   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:04.900096   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:04.900103   10230 round_trippers.go:580]     Audit-Id: 86f0dc24-9c17-4908-b9e5-5fae44248ba2
	I0108 12:53:04.900108   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:04.900114   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:04.900119   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:04.900124   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:04.900129   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:04 GMT
	I0108 12:53:04.900197   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:05.400935   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:05.400956   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:05.400969   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:05.400979   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:05.405074   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:05.405091   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:05.405099   10230 round_trippers.go:580]     Audit-Id: e3be4fc9-a8f6-44a1-82d5-41b4825949b0
	I0108 12:53:05.405113   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:05.405121   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:05.405128   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:05.405135   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:05.405141   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:05 GMT
	I0108 12:53:05.405209   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I0108 12:53:05.405534   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:05.405540   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:05.405548   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:05.405558   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:05.407498   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:05.407507   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:05.407513   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:05.407518   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:05.407523   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:05 GMT
	I0108 12:53:05.407528   10230 round_trippers.go:580]     Audit-Id: 83c4da48-d398-44b3-a3ca-a31158707127
	I0108 12:53:05.407533   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:05.407538   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:05.407588   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:05.902054   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:05.902079   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:05.902092   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:05.902102   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:05.906032   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:05.906047   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:05.906055   10230 round_trippers.go:580]     Audit-Id: 65c8bf4a-eb01-4cda-84da-76f08cf94ff0
	I0108 12:53:05.906061   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:05.906070   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:05.906079   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:05.906098   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:05.906111   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:05 GMT
	I0108 12:53:05.906368   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I0108 12:53:05.906675   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:05.906682   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:05.906688   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:05.906693   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:05.909029   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:05.909039   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:05.909045   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:05.909052   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:05 GMT
	I0108 12:53:05.909057   10230 round_trippers.go:580]     Audit-Id: c23fb512-753a-4896-8486-8854de091847
	I0108 12:53:05.909064   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:05.909069   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:05.909073   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:05.909122   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:06.402304   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:06.402329   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:06.402352   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:06.402388   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:06.406842   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:06.406861   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:06.406869   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:06.406877   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:06 GMT
	I0108 12:53:06.406885   10230 round_trippers.go:580]     Audit-Id: 339cd8f4-81cd-44e7-bf20-7216f87a83c8
	I0108 12:53:06.406891   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:06.406898   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:06.406906   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:06.407359   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"696","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I0108 12:53:06.407753   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:06.407760   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:06.407768   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:06.407773   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:06.410095   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:06.410104   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:06.410111   10230 round_trippers.go:580]     Audit-Id: 4281cb28-18c6-4781-a656-d1f21c01eaf8
	I0108 12:53:06.410116   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:06.410121   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:06.410126   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:06.410131   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:06.410138   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:06 GMT
	I0108 12:53:06.410193   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:06.900717   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:06.900744   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:06.900756   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:06.900766   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:06.904846   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:06.904858   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:06.904863   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:06 GMT
	I0108 12:53:06.904868   10230 round_trippers.go:580]     Audit-Id: 6116a822-e898-450e-be8e-b2c7c03aef4c
	I0108 12:53:06.904872   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:06.904877   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:06.904882   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:06.904887   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:06.904943   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:06.905233   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:06.905240   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:06.905246   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:06.905251   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:06.907608   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:06.907618   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:06.907624   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:06.907628   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:06.907634   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:06.907639   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:06.907644   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:06 GMT
	I0108 12:53:06.907648   10230 round_trippers.go:580]     Audit-Id: 1223433b-d1ec-41e2-9007-94cb5d53b27d
	I0108 12:53:06.907707   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:06.907900   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:07.401670   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:07.401699   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:07.401712   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:07.401722   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:07.406266   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:07.406281   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:07.406289   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:07.406295   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:07.406304   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:07.406310   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:07.406317   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:07 GMT
	I0108 12:53:07.406325   10230 round_trippers.go:580]     Audit-Id: 8a440171-1b9d-4511-af18-45eade58537f
	I0108 12:53:07.406397   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:07.406685   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:07.406694   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:07.406700   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:07.406705   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:07.408851   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:07.408860   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:07.408866   10230 round_trippers.go:580]     Audit-Id: deee1fdb-76a1-4950-9ca1-7e5ea74d29fd
	I0108 12:53:07.408874   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:07.408879   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:07.408884   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:07.408891   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:07.408896   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:07 GMT
	I0108 12:53:07.408941   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:07.900978   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:07.901004   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:07.901041   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:07.901053   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:07.905443   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:07.905460   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:07.905468   10230 round_trippers.go:580]     Audit-Id: 43aba184-11ec-4d7b-982a-4e20db65c4d3
	I0108 12:53:07.905481   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:07.905488   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:07.905495   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:07.905505   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:07.905512   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:07 GMT
	I0108 12:53:07.905599   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:07.905888   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:07.905894   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:07.905900   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:07.905906   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:07.908057   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:07.908068   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:07.908080   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:07.908086   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:07 GMT
	I0108 12:53:07.908091   10230 round_trippers.go:580]     Audit-Id: 3c940629-e21a-4abe-a17d-0a67c0770595
	I0108 12:53:07.908096   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:07.908101   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:07.908107   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:07.908298   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:08.400643   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:08.400665   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:08.400679   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:08.400690   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:08.404941   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:08.404954   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:08.404960   10230 round_trippers.go:580]     Audit-Id: 24dbf2c6-f457-4781-9340-3566268bb28b
	I0108 12:53:08.404965   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:08.404970   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:08.404974   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:08.404980   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:08.404984   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:08 GMT
	I0108 12:53:08.405046   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:08.405344   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:08.405350   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:08.405357   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:08.405362   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:08.407621   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:08.407631   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:08.407637   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:08 GMT
	I0108 12:53:08.407642   10230 round_trippers.go:580]     Audit-Id: 081ba777-2efa-4033-a574-2cadbc586a4f
	I0108 12:53:08.407647   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:08.407652   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:08.407657   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:08.407662   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:08.407715   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:08.902591   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:08.902617   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:08.902630   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:08.902639   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:08.906951   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:08.906973   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:08.906982   10230 round_trippers.go:580]     Audit-Id: e01444bd-a440-458c-b67e-93df79a1beba
	I0108 12:53:08.906989   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:08.906995   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:08.907009   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:08.907016   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:08.907022   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:08 GMT
	I0108 12:53:08.907096   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:08.907476   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:08.907483   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:08.907489   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:08.907498   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:08.909344   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:08.909354   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:08.909361   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:08 GMT
	I0108 12:53:08.909366   10230 round_trippers.go:580]     Audit-Id: a59a0b1a-a7b5-477c-a1b5-e9c09253faaf
	I0108 12:53:08.909372   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:08.909377   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:08.909381   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:08.909386   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:08.909439   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:08.909628   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:09.402626   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:09.402646   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:09.402659   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:09.402669   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:09.406895   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:09.406912   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:09.406920   10230 round_trippers.go:580]     Audit-Id: fa404f6f-1d6f-452f-ad28-7dfda2c3794f
	I0108 12:53:09.406930   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:09.406940   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:09.406953   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:09.406960   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:09.406967   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:09 GMT
	I0108 12:53:09.407044   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:09.407336   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:09.407342   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:09.407348   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:09.407353   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:09.409424   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:09.409435   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:09.409440   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:09.409445   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:09.409450   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:09.409455   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:09 GMT
	I0108 12:53:09.409459   10230 round_trippers.go:580]     Audit-Id: 149cf3f3-ebea-4510-957a-6df1827dcd92
	I0108 12:53:09.409465   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:09.409529   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:09.901516   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:09.901542   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:09.901555   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:09.901565   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:09.905927   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:09.905941   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:09.905947   10230 round_trippers.go:580]     Audit-Id: 08e1f89e-bf09-4917-b4be-2405407e5b92
	I0108 12:53:09.905952   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:09.905956   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:09.905964   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:09.905970   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:09.905975   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:09 GMT
	I0108 12:53:09.906031   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:09.906333   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:09.906339   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:09.906346   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:09.906351   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:09.908413   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:09.908422   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:09.908429   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:09.908436   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:09.908442   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:09.908446   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:09.908451   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:09 GMT
	I0108 12:53:09.908457   10230 round_trippers.go:580]     Audit-Id: 1eb2bcbc-47e0-44f2-91b6-e5637f4fb736
	I0108 12:53:09.908520   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:10.400619   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:10.400643   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:10.400656   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:10.400667   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:10.405116   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:10.405130   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:10.405136   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:10.405141   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:10.405146   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:10.405151   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:10 GMT
	I0108 12:53:10.405156   10230 round_trippers.go:580]     Audit-Id: 63139a02-5739-4763-96a1-bd788d59767d
	I0108 12:53:10.405160   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:10.405213   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:10.405504   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:10.405511   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:10.405517   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:10.405523   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:10.407546   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:10.407556   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:10.407563   10230 round_trippers.go:580]     Audit-Id: d1a0c967-be63-4200-b491-469feaafe4fc
	I0108 12:53:10.407568   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:10.407573   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:10.407578   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:10.407585   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:10.407591   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:10 GMT
	I0108 12:53:10.407641   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:10.902569   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:10.902595   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:10.902607   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:10.902617   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:10.907231   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:10.907244   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:10.907249   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:10.907254   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:10.907259   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:10 GMT
	I0108 12:53:10.907264   10230 round_trippers.go:580]     Audit-Id: 0d6beea5-4cb5-441c-a93e-bb1efecf7a72
	I0108 12:53:10.907269   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:10.907274   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:10.907328   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:10.907626   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:10.907633   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:10.907639   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:10.907644   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:10.910064   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:10.910075   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:10.910080   10230 round_trippers.go:580]     Audit-Id: 4bdb1e23-2f77-4a23-b89b-0e2150a86135
	I0108 12:53:10.910085   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:10.910090   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:10.910095   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:10.910100   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:10.910105   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:10 GMT
	I0108 12:53:10.910165   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:10.910353   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:11.401597   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:11.401619   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:11.401633   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:11.401643   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:11.405721   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:11.405737   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:11.405745   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:11.405758   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:11.405766   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:11.405773   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:11 GMT
	I0108 12:53:11.405780   10230 round_trippers.go:580]     Audit-Id: 2f8b6627-8567-43d3-91e1-53a91ad6cb75
	I0108 12:53:11.405786   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:11.405860   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:11.406195   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:11.406201   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:11.406207   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:11.406212   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:11.408410   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:11.408422   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:11.408428   10230 round_trippers.go:580]     Audit-Id: 454b8085-ce61-4818-89cc-69dc6d74824d
	I0108 12:53:11.408433   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:11.408439   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:11.408447   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:11.408454   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:11.408459   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:11 GMT
	I0108 12:53:11.408517   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:11.900520   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:11.900549   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:11.900564   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:11.900608   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:11.904836   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:11.904848   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:11.904854   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:11.904859   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:11.904863   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:11 GMT
	I0108 12:53:11.904868   10230 round_trippers.go:580]     Audit-Id: 0fc78cf7-45dd-4e4e-8d56-6159d7c62129
	I0108 12:53:11.904873   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:11.904878   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:11.904934   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:11.905227   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:11.905233   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:11.905239   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:11.905244   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:11.907630   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:11.907640   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:11.907646   10230 round_trippers.go:580]     Audit-Id: e905667e-5088-4b9d-9ec5-d9264d15e70a
	I0108 12:53:11.907651   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:11.907656   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:11.907661   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:11.907666   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:11.907671   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:11 GMT
	I0108 12:53:11.907721   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:12.401548   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:12.401575   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:12.401588   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:12.401598   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:12.406018   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:12.406031   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:12.406036   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:12.406041   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:12.406045   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:12.406050   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:12.406055   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:12 GMT
	I0108 12:53:12.406060   10230 round_trippers.go:580]     Audit-Id: cfd9fefd-0bd8-474f-b1d7-cf82c87d0e38
	I0108 12:53:12.406120   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:12.406404   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:12.406410   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:12.406416   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:12.406421   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:12.408206   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:12.408216   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:12.408222   10230 round_trippers.go:580]     Audit-Id: 1611999b-3e7e-4c6a-9af2-fde2c6533874
	I0108 12:53:12.408227   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:12.408232   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:12.408237   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:12.408242   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:12.408248   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:12 GMT
	I0108 12:53:12.408671   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:12.901420   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:12.901444   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:12.901456   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:12.901466   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:12.905575   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:12.905588   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:12.905593   10230 round_trippers.go:580]     Audit-Id: 9ec13795-c602-4cda-a089-66513d4fe34b
	I0108 12:53:12.905605   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:12.905610   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:12.905615   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:12.905620   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:12.905625   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:12 GMT
	I0108 12:53:12.905683   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:12.905972   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:12.905979   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:12.905985   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:12.905991   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:12.907998   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:12.908007   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:12.908013   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:12 GMT
	I0108 12:53:12.908018   10230 round_trippers.go:580]     Audit-Id: 25939f30-fa79-4b2b-b819-67c364f92dce
	I0108 12:53:12.908023   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:12.908027   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:12.908032   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:12.908037   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:12.908084   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:13.401642   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:13.401663   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:13.401676   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:13.401687   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:13.406260   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:13.406275   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:13.406281   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:13.406285   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:13.406290   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:13.406295   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:13 GMT
	I0108 12:53:13.406301   10230 round_trippers.go:580]     Audit-Id: e39a8098-b9cf-4481-b585-f0ce7307d0e8
	I0108 12:53:13.406307   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:13.406363   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:13.406647   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:13.406654   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:13.406660   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:13.406666   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:13.408997   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:13.409007   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:13.409012   10230 round_trippers.go:580]     Audit-Id: e8d6252b-53be-4cfd-b6a3-5ea2ceff75e5
	I0108 12:53:13.409018   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:13.409023   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:13.409028   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:13.409033   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:13.409038   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:13 GMT
	I0108 12:53:13.409079   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:13.409264   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
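	(Editor's note, illustration only.) The status line above ("Ready":"False") is what drives the roughly 500ms polling loop visible in this log: pod_ready keeps re-fetching the coredns pod and its node until the pod's Ready condition turns True. The sketch below is not minikube's actual pod_ready implementation; it is a minimal client-go example of the same readiness check, assuming standard client-go APIs, with the kubeconfig path and the hard-coded pod name used purely for illustration.

	// Illustrative sketch only: poll a pod until its Ready condition is True,
	// the same condition the pod_ready log lines above are reporting on.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path, used only for this example.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		for {
			// Re-fetch the pod, analogous to the repeated GETs in the log.
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-565d847f94-f6gqj", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println(`pod has status "Ready":"False"`)
			time.Sleep(500 * time.Millisecond)
		}
	}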
	I0108 12:53:13.902095   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:13.902121   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:13.902134   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:13.902143   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:13.906228   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:13.906245   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:13.906253   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:13.906266   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:13.906273   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:13.906280   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:13 GMT
	I0108 12:53:13.906288   10230 round_trippers.go:580]     Audit-Id: a5efbfde-e311-4de7-b8f6-2b846d8c7db9
	I0108 12:53:13.906296   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:13.906376   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:13.906673   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:13.906681   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:13.906690   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:13.906697   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:13.909030   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:13.909043   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:13.909050   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:13.909055   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:13.909060   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:13.909065   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:13 GMT
	I0108 12:53:13.909070   10230 round_trippers.go:580]     Audit-Id: c4252b24-d6c4-4dd9-90d2-3573f3a69d4c
	I0108 12:53:13.909074   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:13.909208   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:14.400576   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:14.400591   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:14.400598   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:14.400603   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:14.404714   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:14.404728   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:14.404734   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:14 GMT
	I0108 12:53:14.404739   10230 round_trippers.go:580]     Audit-Id: 31fba4f1-e4f3-43e7-9059-894e1dbef4e2
	I0108 12:53:14.404745   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:14.404750   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:14.404755   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:14.404760   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:14.404812   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:14.405107   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:14.405114   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:14.405120   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:14.405125   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:14.407713   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:14.407724   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:14.407730   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:14 GMT
	I0108 12:53:14.407736   10230 round_trippers.go:580]     Audit-Id: d3a50b73-649e-47c3-872b-5cb58ef985ca
	I0108 12:53:14.407742   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:14.407746   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:14.407752   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:14.407757   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:14.407803   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:14.901611   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:14.901638   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:14.901651   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:14.901662   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:14.906384   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:14.906396   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:14.906402   10230 round_trippers.go:580]     Audit-Id: e4ca0c28-4aad-478d-bbfc-803d8ec54ad6
	I0108 12:53:14.906407   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:14.906412   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:14.906417   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:14.906422   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:14.906427   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:14 GMT
	I0108 12:53:14.906488   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:14.906781   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:14.906788   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:14.906794   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:14.906799   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:14.909111   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:14.909120   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:14.909125   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:14.909130   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:14 GMT
	I0108 12:53:14.909135   10230 round_trippers.go:580]     Audit-Id: 34441076-77ce-49c2-a7b5-98828c7da87c
	I0108 12:53:14.909140   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:14.909145   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:14.909150   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:14.909204   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:15.400545   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:15.400566   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:15.400580   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:15.400590   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:15.404808   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:15.404825   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:15.404833   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:15 GMT
	I0108 12:53:15.404841   10230 round_trippers.go:580]     Audit-Id: 6abc98d0-297e-420b-99db-0b701ea3216e
	I0108 12:53:15.404847   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:15.404854   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:15.404861   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:15.404867   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:15.404936   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:15.405256   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:15.405263   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:15.405269   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:15.405281   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:15.407356   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:15.407370   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:15.407380   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:15.407394   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:15.407402   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:15 GMT
	I0108 12:53:15.407407   10230 round_trippers.go:580]     Audit-Id: 34f612f5-85e1-409f-90c1-7de9fe87a42b
	I0108 12:53:15.407412   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:15.407420   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:15.407605   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:15.901013   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:15.901036   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:15.901049   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:15.901058   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:15.905050   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:15.905066   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:15.905074   10230 round_trippers.go:580]     Audit-Id: 91030e44-92b8-4aa9-ab1a-cf0650784ee0
	I0108 12:53:15.905081   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:15.905088   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:15.905097   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:15.905106   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:15.905114   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:15 GMT
	I0108 12:53:15.905201   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:15.905486   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:15.905492   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:15.905498   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:15.905517   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:15.907705   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:15.907714   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:15.907720   10230 round_trippers.go:580]     Audit-Id: 63dcdbd4-079b-44bf-b05d-4dd5cf1a927c
	I0108 12:53:15.907725   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:15.907731   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:15.907736   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:15.907741   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:15.907746   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:15 GMT
	I0108 12:53:15.907821   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:15.908007   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:16.401785   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:16.401810   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:16.401823   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:16.401834   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:16.406307   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:16.406323   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:16.406332   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:16.406338   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:16.406346   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:16 GMT
	I0108 12:53:16.406369   10230 round_trippers.go:580]     Audit-Id: 9e9fd6f6-7ba7-4dd5-ada7-e6d2c1c283a7
	I0108 12:53:16.406374   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:16.406379   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:16.406432   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:16.406720   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:16.406727   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:16.406733   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:16.406739   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:16.408737   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:16.408746   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:16.408753   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:16.408759   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:16.408766   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:16.408771   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:16.408776   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:16 GMT
	I0108 12:53:16.408780   10230 round_trippers.go:580]     Audit-Id: 4fd2ae85-585f-4575-b8f5-2ca56f54ea61
	I0108 12:53:16.408844   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:16.902488   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:16.902511   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:16.902523   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:16.902534   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:16.906420   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:16.906435   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:16.906443   10230 round_trippers.go:580]     Audit-Id: 15ca3d61-78df-40cb-b805-425d65d48bd2
	I0108 12:53:16.906450   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:16.906458   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:16.906467   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:16.906476   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:16.906482   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:16 GMT
	I0108 12:53:16.906891   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:16.907185   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:16.907196   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:16.907205   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:16.907213   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:16.908795   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:16.908806   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:16.908814   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:16.908820   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:16.908825   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:16.908829   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:16.908843   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:16 GMT
	I0108 12:53:16.908851   10230 round_trippers.go:580]     Audit-Id: 80c2470e-71d5-43c6-a0ab-09b0ffef8725
	I0108 12:53:16.908992   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:17.402468   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:17.402487   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:17.402499   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:17.402510   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:17.406665   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:17.406682   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:17.406690   10230 round_trippers.go:580]     Audit-Id: 2cb32221-fa9f-4e97-897e-55105c794b4a
	I0108 12:53:17.406699   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:17.406719   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:17.406726   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:17.406735   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:17.406743   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:17 GMT
	I0108 12:53:17.406825   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:17.407156   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:17.407163   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:17.407169   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:17.407174   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:17.409602   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:17.409613   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:17.409619   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:17.409627   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:17 GMT
	I0108 12:53:17.409633   10230 round_trippers.go:580]     Audit-Id: 2d38e923-0650-441e-a2a6-63f10808e1aa
	I0108 12:53:17.409645   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:17.409651   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:17.409656   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:17.409800   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:17.902474   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:17.902502   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:17.902515   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:17.902526   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:17.906759   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:17.906775   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:17.906783   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:17.906790   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:17.906797   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:17 GMT
	I0108 12:53:17.906803   10230 round_trippers.go:580]     Audit-Id: c3268ac1-ae94-4431-a034-f6fdd1206609
	I0108 12:53:17.906814   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:17.906821   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:17.906889   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:17.907199   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:17.907206   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:17.907212   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:17.907217   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:17.909451   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:17.909461   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:17.909467   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:17.909472   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:17 GMT
	I0108 12:53:17.909477   10230 round_trippers.go:580]     Audit-Id: 0873bcf6-604a-4c02-8aac-ef60aee2ca2e
	I0108 12:53:17.909482   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:17.909487   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:17.909492   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:17.909538   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:17.909718   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:18.400413   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:18.400437   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:18.400450   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:18.400461   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:18.404469   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:18.404479   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:18.404484   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:18.404489   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:18.404494   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:18.404499   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:18 GMT
	I0108 12:53:18.404505   10230 round_trippers.go:580]     Audit-Id: 7b014a1a-48a2-4559-beca-dabd8cf065d5
	I0108 12:53:18.404509   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:18.404555   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:18.404833   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:18.404840   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:18.404846   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:18.404851   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:18.407012   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:18.407020   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:18.407026   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:18.407031   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:18.407036   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:18 GMT
	I0108 12:53:18.407041   10230 round_trippers.go:580]     Audit-Id: 4ca59c1b-9703-428d-8102-77abcc326ad3
	I0108 12:53:18.407047   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:18.407051   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:18.407177   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:18.901752   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:18.901778   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:18.901801   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:18.901812   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:18.906416   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:18.906432   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:18.906440   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:18.906446   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:18.906452   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:18.906460   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:18 GMT
	I0108 12:53:18.906466   10230 round_trippers.go:580]     Audit-Id: 4857704d-7df2-4e13-8c75-3d95e18fc015
	I0108 12:53:18.906473   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:18.906547   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:18.906844   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:18.906850   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:18.906856   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:18.906870   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:18.909028   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:18.909037   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:18.909044   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:18 GMT
	I0108 12:53:18.909049   10230 round_trippers.go:580]     Audit-Id: 7ee56434-087c-45a3-ac07-e6af86325d0d
	I0108 12:53:18.909056   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:18.909061   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:18.909065   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:18.909070   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:18.909121   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:19.402461   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:19.402484   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:19.402497   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:19.402508   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:19.406907   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:19.406920   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:19.406925   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:19.406930   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:19.406935   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:19 GMT
	I0108 12:53:19.406940   10230 round_trippers.go:580]     Audit-Id: dfcf0c57-1209-4908-a86d-aecf2d920be0
	I0108 12:53:19.406945   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:19.406949   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:19.406998   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:19.407286   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:19.407294   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:19.407301   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:19.407306   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:19.409219   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:19.409229   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:19.409236   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:19.409242   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:19.409247   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:19 GMT
	I0108 12:53:19.409253   10230 round_trippers.go:580]     Audit-Id: 6ac1d95d-7850-4b69-a979-ef2961c21f6a
	I0108 12:53:19.409259   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:19.409266   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:19.409489   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:19.902438   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:19.902461   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:19.902474   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:19.902484   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:19.906837   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:19.906851   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:19.906858   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:19.906862   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:19.906868   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:19.906873   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:19.906878   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:19 GMT
	I0108 12:53:19.906883   10230 round_trippers.go:580]     Audit-Id: 58c006a8-c967-48ac-8a69-ffb05fe531ee
	I0108 12:53:19.906937   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:19.907227   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:19.907234   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:19.907240   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:19.907247   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:19.909358   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:19.909369   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:19.909374   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:19.909380   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:19.909396   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:19.909404   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:19.909410   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:19 GMT
	I0108 12:53:19.909416   10230 round_trippers.go:580]     Audit-Id: c60990e9-6029-46ff-b5d7-da42f950c0ed
	I0108 12:53:19.909472   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:20.401792   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:20.401814   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:20.401827   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:20.401837   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:20.406255   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:20.406267   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:20.406272   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:20.406279   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:20 GMT
	I0108 12:53:20.406286   10230 round_trippers.go:580]     Audit-Id: 7910fbf0-f910-4c71-8de1-eecec1bf70bb
	I0108 12:53:20.406292   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:20.406296   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:20.406301   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:20.406357   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:20.406646   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:20.406652   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:20.406659   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:20.406667   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:20.408621   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:20.408631   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:20.408636   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:20.408642   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:20.408647   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:20.408652   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:20 GMT
	I0108 12:53:20.408657   10230 round_trippers.go:580]     Audit-Id: 1b8378f3-da8f-4413-8140-a971035373ca
	I0108 12:53:20.408662   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:20.408715   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:20.408892   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:20.901666   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:20.901692   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:20.901729   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:20.901740   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:20.905569   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:20.905581   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:20.905587   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:20.905593   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:20.905597   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:20.905602   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:20 GMT
	I0108 12:53:20.905607   10230 round_trippers.go:580]     Audit-Id: 7537b795-bae0-4d32-8762-8cd9df9e46df
	I0108 12:53:20.905612   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:20.905667   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:20.905961   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:20.905969   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:20.905977   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:20.905987   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:20.908220   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:20.908229   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:20.908234   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:20.908239   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:20.908243   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:20 GMT
	I0108 12:53:20.908248   10230 round_trippers.go:580]     Audit-Id: fbac2927-eff4-470f-9ece-c6beb8fa62c3
	I0108 12:53:20.908253   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:20.908258   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:20.908309   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:21.402358   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:21.402382   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:21.402395   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:21.402405   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:21.406134   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:21.406144   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:21.406150   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:21.406154   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:21.406159   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:21 GMT
	I0108 12:53:21.406164   10230 round_trippers.go:580]     Audit-Id: 4d3e4f91-c5da-4cc0-8cfa-ad203450641b
	I0108 12:53:21.406169   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:21.406173   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:21.406477   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:21.406793   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:21.406800   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:21.406806   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:21.406811   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:21.409063   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:21.409074   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:21.409079   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:21.409084   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:21.409089   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:21.409094   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:21.409099   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:21 GMT
	I0108 12:53:21.409104   10230 round_trippers.go:580]     Audit-Id: fa4d06fd-8316-4765-a047-bb3d8c1daffc
	I0108 12:53:21.409153   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:21.900510   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:21.900536   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:21.900548   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:21.900558   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:21.904983   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:21.904997   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:21.905004   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:21.905009   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:21.905015   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:21.905023   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:21 GMT
	I0108 12:53:21.905028   10230 round_trippers.go:580]     Audit-Id: b1da2cbf-e8dc-42de-8532-21fc93d74fb7
	I0108 12:53:21.905033   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:21.905090   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:21.905377   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:21.905383   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:21.905389   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:21.905395   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:21.907375   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:21.907384   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:21.907391   10230 round_trippers.go:580]     Audit-Id: f553963c-3885-484f-9553-045cb801bbfc
	I0108 12:53:21.907396   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:21.907402   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:21.907406   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:21.907411   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:21.907416   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:21 GMT
	I0108 12:53:21.907475   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:22.401280   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:22.401313   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:22.401326   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:22.401336   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:22.405879   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:22.405891   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:22.405897   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:22.405909   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:22.405915   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:22 GMT
	I0108 12:53:22.405920   10230 round_trippers.go:580]     Audit-Id: be29bbfd-000f-4676-8f27-bb553493de52
	I0108 12:53:22.405925   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:22.405930   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:22.405981   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:22.406270   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:22.406277   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:22.406283   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:22.406288   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:22.408044   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:22.408056   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:22.408064   10230 round_trippers.go:580]     Audit-Id: dd10bc20-5a4e-44cc-ae0f-c28640e35646
	I0108 12:53:22.408071   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:22.408079   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:22.408085   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:22.408093   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:22.408107   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:22 GMT
	I0108 12:53:22.408485   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:22.900361   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:22.900387   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:22.900400   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:22.900409   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:22.904191   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:22.904204   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:22.904211   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:22.904216   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:22.904220   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:22.904227   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:22 GMT
	I0108 12:53:22.904231   10230 round_trippers.go:580]     Audit-Id: 48409349-ed45-4594-8ea3-c1a47b7f711b
	I0108 12:53:22.904236   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:22.904601   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:22.904893   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:22.904900   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:22.904906   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:22.904911   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:22.907026   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:22.907035   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:22.907041   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:22.907045   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:22.907050   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:22 GMT
	I0108 12:53:22.907055   10230 round_trippers.go:580]     Audit-Id: 71982930-1743-4668-a28c-8d65a6de135e
	I0108 12:53:22.907060   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:22.907065   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:22.907111   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:22.907301   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:23.401854   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:23.401875   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:23.401888   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:23.401899   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:23.406364   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:23.406380   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:23.406388   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:23.406397   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:23.406405   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:23 GMT
	I0108 12:53:23.406411   10230 round_trippers.go:580]     Audit-Id: 5fc4e16d-5909-457c-b6e3-01c1f3d22272
	I0108 12:53:23.406418   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:23.406425   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:23.406492   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:23.406795   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:23.406802   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:23.406808   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:23.406813   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:23.409009   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:23.409019   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:23.409026   10230 round_trippers.go:580]     Audit-Id: 1789df30-a3a3-4045-90ed-b2c02fe9a947
	I0108 12:53:23.409032   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:23.409037   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:23.409046   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:23.409052   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:23.409057   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:23 GMT
	I0108 12:53:23.409118   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:23.900995   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:23.901021   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:23.901033   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:23.901043   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:23.905417   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:23.905429   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:23.905434   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:23.905440   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:23 GMT
	I0108 12:53:23.905444   10230 round_trippers.go:580]     Audit-Id: ef352e6e-433f-4e5e-a0b2-ae7f0f1512cd
	I0108 12:53:23.905450   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:23.905454   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:23.905459   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:23.905528   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:23.905833   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:23.905839   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:23.905848   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:23.905865   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:23.907989   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:23.907999   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:23.908005   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:23 GMT
	I0108 12:53:23.908010   10230 round_trippers.go:580]     Audit-Id: 3bcb21f7-efbe-48a2-9024-cbf8d08a3aa8
	I0108 12:53:23.908015   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:23.908020   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:23.908025   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:23.908030   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:23.908087   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:24.401859   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:24.401880   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:24.401893   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:24.401902   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:24.406348   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:24.406363   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:24.406371   10230 round_trippers.go:580]     Audit-Id: 4deb5dc1-3554-4742-870b-49b5ac2e115b
	I0108 12:53:24.406378   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:24.406385   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:24.406397   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:24.406405   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:24.406411   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:24 GMT
	I0108 12:53:24.406484   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:24.406841   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:24.406847   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:24.406853   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:24.406858   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:24.408896   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:24.408906   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:24.408911   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:24.408916   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:24.408922   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:24.408926   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:24.408931   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:24 GMT
	I0108 12:53:24.408936   10230 round_trippers.go:580]     Audit-Id: 21a408db-46d7-4fce-8203-d1b49e59b012
	I0108 12:53:24.408983   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:24.902363   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:24.902390   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:24.902402   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:24.902412   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:24.907155   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:24.907169   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:24.907175   10230 round_trippers.go:580]     Audit-Id: 301ecd7a-32cf-4607-aaa8-2e20214ca984
	I0108 12:53:24.907180   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:24.907189   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:24.907195   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:24.907200   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:24.907204   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:24 GMT
	I0108 12:53:24.907261   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:24.907555   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:24.907561   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:24.907567   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:24.907572   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:24.909793   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:24.909802   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:24.909807   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:24.909812   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:24.909817   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:24.909826   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:24.909832   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:24 GMT
	I0108 12:53:24.909836   10230 round_trippers.go:580]     Audit-Id: 3b17ef80-ede1-417e-8892-cfa6dda0a4b4
	I0108 12:53:24.909885   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:24.910079   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
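The repeated GET /api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj and GET /api/v1/nodes/multinode-124908 pairs above, together with the pod_ready.go "Ready":"False" lines, are minikube's readiness wait: it polls the API server roughly every 500ms until the coredns pod reports a Ready condition of True or the wait times out. A minimal client-go sketch of such a poll is shown below; this is not minikube's actual pod_ready.go code, and the kubeconfig path, namespace, pod name, interval, and timeout are assumptions chosen only to mirror the cadence visible in this log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Load the default kubeconfig (~/.kube/config); the path is an assumption.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 500ms until the pod is Ready or the timeout expires,
        // mirroring the roughly half-second spacing between requests in the log.
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-565d847f94-f6gqj", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            ready := isPodReady(pod)
            fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
            return ready, nil
        })
        if err != nil {
            fmt.Println("pod never became Ready:", err)
        }
    }

The same condition can be checked by hand against the test cluster with, for example, kubectl describe pod coredns-565d847f94-f6gqj -n kube-system, which also shows the events explaining why the pod is stuck NotReady.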
	I0108 12:53:25.400330   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:25.400350   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:25.400362   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:25.400372   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:25.404087   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:25.404115   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:25.404121   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:25.404126   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:25.404132   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:25.404136   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:25.404143   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:25 GMT
	I0108 12:53:25.404152   10230 round_trippers.go:580]     Audit-Id: 1523ac6d-60a0-45d7-b65d-1520214807b7
	I0108 12:53:25.404213   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:25.404501   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:25.404508   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:25.404514   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:25.404519   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:25.406290   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:25.406299   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:25.406305   10230 round_trippers.go:580]     Audit-Id: ed0fba3a-11e2-4bb5-b9e5-971494a0f31c
	I0108 12:53:25.406310   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:25.406317   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:25.406322   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:25.406331   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:25.406338   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:25 GMT
	I0108 12:53:25.406507   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:25.902301   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:25.902330   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:25.902344   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:25.902355   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:25.906309   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:25.906333   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:25.906342   10230 round_trippers.go:580]     Audit-Id: 58482e80-7110-4619-acd3-26f80a47e283
	I0108 12:53:25.906349   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:25.906356   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:25.906362   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:25.906370   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:25.906383   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:25 GMT
	I0108 12:53:25.906511   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:25.906846   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:25.906854   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:25.906860   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:25.906865   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:25.909156   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:25.909166   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:25.909171   10230 round_trippers.go:580]     Audit-Id: 9c9072c5-be39-4190-ad2c-f688615a513c
	I0108 12:53:25.909177   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:25.909182   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:25.909187   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:25.909194   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:25.909200   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:25 GMT
	I0108 12:53:25.909249   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:26.401256   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:26.401279   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:26.401292   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:26.401302   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:26.405668   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:26.405681   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:26.405689   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:26.405696   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:26.405701   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:26 GMT
	I0108 12:53:26.405711   10230 round_trippers.go:580]     Audit-Id: 70df19b6-4cc6-4ad6-9eaa-45888e4ec5f5
	I0108 12:53:26.405717   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:26.405722   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:26.405793   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:26.406104   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:26.406111   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:26.406117   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:26.406122   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:26.408210   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:26.408220   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:26.408226   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:26.408231   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:26.408236   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:26 GMT
	I0108 12:53:26.408241   10230 round_trippers.go:580]     Audit-Id: 187e4b70-80cd-44b0-acf2-36cfbfb4e117
	I0108 12:53:26.408246   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:26.408251   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:26.408302   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:26.900317   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:26.900343   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:26.900358   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:26.900368   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:26.904258   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:26.904268   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:26.904274   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:26.904279   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:26.904284   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:26.904292   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:26 GMT
	I0108 12:53:26.904296   10230 round_trippers.go:580]     Audit-Id: de495b00-b6fe-456f-a6d1-04fc79eb728d
	I0108 12:53:26.904301   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:26.904421   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:26.904719   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:26.904726   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:26.904732   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:26.904737   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:26.906673   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:26.906683   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:26.906689   10230 round_trippers.go:580]     Audit-Id: e932223a-1b1c-443d-900f-822fd50c9bb8
	I0108 12:53:26.906694   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:26.906699   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:26.906704   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:26.906708   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:26.906713   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:26 GMT
	I0108 12:53:26.906949   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:27.400506   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:27.400535   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:27.400549   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:27.400560   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:27.404653   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:27.404668   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:27.404689   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:27.404695   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:27.404699   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:27.404703   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:27 GMT
	I0108 12:53:27.404708   10230 round_trippers.go:580]     Audit-Id: 1f95bf5f-60a9-4fac-a58c-e329fa0675e5
	I0108 12:53:27.404714   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:27.404770   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:27.405062   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:27.405068   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:27.405074   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:27.405079   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:27.407145   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:27.407155   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:27.407161   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:27 GMT
	I0108 12:53:27.407167   10230 round_trippers.go:580]     Audit-Id: 5b403726-ecd5-45c7-a056-437a135b5f72
	I0108 12:53:27.407173   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:27.407178   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:27.407182   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:27.407187   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:27.407247   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:27.407430   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:27.901462   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:27.901487   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:27.901499   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:27.901509   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:27.905797   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:27.905811   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:27.905816   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:27.905826   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:27 GMT
	I0108 12:53:27.905832   10230 round_trippers.go:580]     Audit-Id: cfda5a56-a569-4e52-b8d8-855af286e543
	I0108 12:53:27.905836   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:27.905841   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:27.905847   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:27.905913   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:27.906205   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:27.906212   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:27.906218   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:27.906223   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:27.908189   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:27.908201   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:27.908206   10230 round_trippers.go:580]     Audit-Id: f197df51-ed54-4116-8edf-9115819dea5a
	I0108 12:53:27.908212   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:27.908216   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:27.908222   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:27.908226   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:27.908232   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:27 GMT
	I0108 12:53:27.908294   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:28.402322   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:28.402348   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:28.402360   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:28.402370   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:28.406692   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:28.406708   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:28.406716   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:28.406723   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:28.406729   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:28.406736   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:28 GMT
	I0108 12:53:28.406742   10230 round_trippers.go:580]     Audit-Id: 441cee74-03c6-4231-ba3f-c34f0b4d49db
	I0108 12:53:28.406749   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:28.406823   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:28.407185   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:28.407192   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:28.407200   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:28.407206   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:28.409542   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:28.409552   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:28.409557   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:28.409569   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:28.409575   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:28.409579   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:28 GMT
	I0108 12:53:28.409585   10230 round_trippers.go:580]     Audit-Id: 96932676-d079-4569-a3a7-863cd219b237
	I0108 12:53:28.409590   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:28.409642   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:28.900543   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:28.900572   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:28.900586   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:28.900623   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:28.905189   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:28.905201   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:28.905207   10230 round_trippers.go:580]     Audit-Id: 70370d8c-7e06-4264-80b6-68806ba6c2b0
	I0108 12:53:28.905212   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:28.905217   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:28.905222   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:28.905226   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:28.905231   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:28 GMT
	I0108 12:53:28.905291   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:28.905621   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:28.905628   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:28.905634   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:28.905640   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:28.907795   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:28.907804   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:28.907810   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:28 GMT
	I0108 12:53:28.907815   10230 round_trippers.go:580]     Audit-Id: fcf5e9c1-9041-4ca6-95af-4905f9712653
	I0108 12:53:28.907820   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:28.907825   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:28.907830   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:28.907841   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:28.907927   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:29.402308   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:29.402334   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:29.402347   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:29.402357   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:29.406691   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:29.406704   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:29.406710   10230 round_trippers.go:580]     Audit-Id: 2a77bc9d-8f5a-4c76-bf0f-7e974b383b6a
	I0108 12:53:29.406715   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:29.406731   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:29.406739   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:29.406748   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:29.406754   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:29 GMT
	I0108 12:53:29.406817   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:29.407106   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:29.407112   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:29.407118   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:29.407124   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:29.409277   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:29.409287   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:29.409293   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:29.409299   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:29.409304   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:29.409308   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:29.409314   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:29 GMT
	I0108 12:53:29.409318   10230 round_trippers.go:580]     Audit-Id: 207f303c-463e-452e-bc79-90a410d7c248
	I0108 12:53:29.409379   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:29.409564   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:29.900777   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:29.900807   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:29.900821   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:29.900833   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:29.905021   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:29.905039   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:29.905047   10230 round_trippers.go:580]     Audit-Id: 9b1a72e5-c7ce-459b-8d14-41db8f1057d0
	I0108 12:53:29.905060   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:29.905068   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:29.905074   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:29.905081   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:29.905088   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:29 GMT
	I0108 12:53:29.905180   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:29.905492   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:29.905498   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:29.905506   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:29.905512   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:29.907736   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:29.907745   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:29.907750   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:29.907756   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:29.907762   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:29 GMT
	I0108 12:53:29.907767   10230 round_trippers.go:580]     Audit-Id: 6912b489-e2b5-4b52-b2d3-1f3924665358
	I0108 12:53:29.907772   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:29.907777   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:29.907839   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:30.400604   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:30.400630   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:30.400642   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:30.400652   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:30.404591   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:30.404605   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:30.404611   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:30.404616   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:30.404621   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:30 GMT
	I0108 12:53:30.404625   10230 round_trippers.go:580]     Audit-Id: bb29b27f-54fc-4d4c-9ead-4f99b5bc2320
	I0108 12:53:30.404631   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:30.404636   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:30.404724   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:30.405048   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:30.405055   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:30.405061   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:30.405066   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:30.407081   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:30.407090   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:30.407097   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:30.407103   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:30.407108   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:30.407113   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:30 GMT
	I0108 12:53:30.407118   10230 round_trippers.go:580]     Audit-Id: 62d876d2-4387-44b6-b623-4e6a6e00fdcc
	I0108 12:53:30.407123   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:30.407180   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:30.900565   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:30.900592   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:30.900605   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:30.900615   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:30.905097   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:30.905110   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:30.905116   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:30.905120   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:30.905124   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:30.905129   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:30.905133   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:30 GMT
	I0108 12:53:30.905137   10230 round_trippers.go:580]     Audit-Id: cc113f36-ef16-4e59-8e55-c8935a20396f
	I0108 12:53:30.905205   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:30.905497   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:30.905503   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:30.905509   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:30.905514   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:30.907583   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:30.907592   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:30.907598   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:30.907603   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:30 GMT
	I0108 12:53:30.907608   10230 round_trippers.go:580]     Audit-Id: 267c3cdc-e948-4d55-a666-a37c8819207f
	I0108 12:53:30.907612   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:30.907620   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:30.907625   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:30.907697   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:31.402368   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:31.402390   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:31.402403   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:31.402414   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:31.406695   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:31.406712   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:31.406720   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:31 GMT
	I0108 12:53:31.406766   10230 round_trippers.go:580]     Audit-Id: 9905734c-bfb8-4520-b7c5-81f2c15194d8
	I0108 12:53:31.406775   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:31.406781   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:31.406802   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:31.406807   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:31.406872   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:31.407192   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:31.407198   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:31.407204   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:31.407210   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:31.409118   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:31.409128   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:31.409134   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:31.409139   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:31.409144   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:31 GMT
	I0108 12:53:31.409149   10230 round_trippers.go:580]     Audit-Id: d51ed3c5-d662-4e68-a68f-7bfd2295fb35
	I0108 12:53:31.409154   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:31.409159   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:31.409227   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:31.900785   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:31.900811   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:31.900824   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:31.900885   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:31.904694   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:31.904711   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:31.904719   10230 round_trippers.go:580]     Audit-Id: b19c7977-5603-4ef0-b8cf-e91fd1609d10
	I0108 12:53:31.904732   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:31.904740   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:31.904746   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:31.904752   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:31.904759   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:31 GMT
	I0108 12:53:31.904977   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:31.905290   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:31.905297   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:31.905303   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:31.905309   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:31.907376   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:31.907385   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:31.907390   10230 round_trippers.go:580]     Audit-Id: 1f420ce4-148f-496d-98fe-a7d5389adfac
	I0108 12:53:31.907395   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:31.907400   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:31.907405   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:31.907410   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:31.907414   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:31 GMT
	I0108 12:53:31.907756   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:31.908204   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:32.400252   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:32.400278   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:32.400290   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:32.400300   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:32.404077   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:32.404092   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:32.404098   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:32.404104   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:32.404109   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:32.404114   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:32.404119   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:32 GMT
	I0108 12:53:32.404124   10230 round_trippers.go:580]     Audit-Id: 27a826e0-dc84-4ceb-ad5c-0f04906eb3a5
	I0108 12:53:32.404183   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:32.404478   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:32.404486   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:32.404492   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:32.404497   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:32.406895   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:32.406905   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:32.406911   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:32.406916   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:32.406922   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:32.406927   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:32 GMT
	I0108 12:53:32.406932   10230 round_trippers.go:580]     Audit-Id: 0e056654-e8e7-4502-ad34-3cf88df1b44d
	I0108 12:53:32.406936   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:32.406984   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:32.900689   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:32.900714   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:32.900727   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:32.900737   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:32.904928   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:32.904951   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:32.904962   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:32 GMT
	I0108 12:53:32.904971   10230 round_trippers.go:580]     Audit-Id: 01bd709a-c4b2-44e3-93f0-47c85e8686d4
	I0108 12:53:32.904980   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:32.904989   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:32.904996   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:32.905003   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:32.905138   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:32.905462   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:32.905469   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:32.905475   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:32.905481   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:32.907362   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:32.907396   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:32.907406   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:32.907411   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:32.907419   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:32.907423   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:32.907428   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:32 GMT
	I0108 12:53:32.907434   10230 round_trippers.go:580]     Audit-Id: d467aab8-463b-4fc9-b7a4-c47423207dc4
	I0108 12:53:32.907495   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:33.402248   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:33.402270   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:33.402293   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:33.402304   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:33.406679   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:33.406691   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:33.406697   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:33.406702   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:33.406708   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:33.406713   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:33.406718   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:33 GMT
	I0108 12:53:33.406722   10230 round_trippers.go:580]     Audit-Id: 68f86963-55bb-4209-9ca5-720a5aed892e
	I0108 12:53:33.406795   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:33.407087   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:33.407093   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:33.407100   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:33.407105   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:33.408965   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:33.408973   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:33.408979   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:33.408984   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:33.408989   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:33 GMT
	I0108 12:53:33.408994   10230 round_trippers.go:580]     Audit-Id: 4575536c-5f84-4939-a93d-7cd5ce1e9fcc
	I0108 12:53:33.408999   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:33.409004   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:33.409055   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:33.901797   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:33.901825   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:33.901840   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:33.901850   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:33.906154   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:33.906168   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:33.906174   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:33.906179   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:33.906184   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:33.906189   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:33.906193   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:33 GMT
	I0108 12:53:33.906198   10230 round_trippers.go:580]     Audit-Id: 7a1f660e-e7a1-4406-ab6c-94eed0b34f9d
	I0108 12:53:33.906273   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:33.906566   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:33.906573   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:33.906579   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:33.906584   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:33.908556   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:33.908567   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:33.908572   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:33.908577   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:33.908583   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:33 GMT
	I0108 12:53:33.908588   10230 round_trippers.go:580]     Audit-Id: 74e17cee-66f4-4794-aa84-1adf06c31bfc
	I0108 12:53:33.908593   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:33.908599   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:33.908696   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:33.908882   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:34.400409   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:34.400432   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:34.400445   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:34.400455   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:34.404611   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:34.404624   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:34.404635   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:34.404640   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:34.404645   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:34 GMT
	I0108 12:53:34.404650   10230 round_trippers.go:580]     Audit-Id: 40421b1a-f445-45a1-8698-2a8fc0f33285
	I0108 12:53:34.404655   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:34.404660   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:34.404706   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:34.404990   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:34.404997   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:34.405003   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:34.405015   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:34.407210   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:34.407220   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:34.407225   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:34.407231   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:34.407236   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:34.407241   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:34.407246   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:34 GMT
	I0108 12:53:34.407250   10230 round_trippers.go:580]     Audit-Id: 4e60496b-3ec9-4a1f-a1af-6d7448914c00
	I0108 12:53:34.407302   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:34.902053   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:34.902081   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:34.902094   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:34.902106   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:34.906369   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:34.906385   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:34.906392   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:34.906399   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:34.906405   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:34 GMT
	I0108 12:53:34.906413   10230 round_trippers.go:580]     Audit-Id: 7cd289e8-d4f0-43a4-b3a1-58b98bcfaf92
	I0108 12:53:34.906419   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:34.906425   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:34.906488   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:34.906826   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:34.906833   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:34.906839   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:34.906844   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:34.909205   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:34.909213   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:34.909218   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:34.909223   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:34.909228   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:34.909233   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:34.909238   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:34 GMT
	I0108 12:53:34.909243   10230 round_trippers.go:580]     Audit-Id: b2c5e9d0-7d84-4442-8ad0-a827e1f5e4ae
	I0108 12:53:34.909294   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:35.402214   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:35.402242   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:35.402255   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:35.402265   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:35.407950   10230 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 12:53:35.407963   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:35.407968   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:35.407973   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:35.407978   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:35.407983   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:35 GMT
	I0108 12:53:35.407989   10230 round_trippers.go:580]     Audit-Id: 15c175c0-55c6-41d7-b60a-9ea168af00a0
	I0108 12:53:35.407994   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:35.408059   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:35.408341   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:35.408347   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:35.408353   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:35.408358   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:35.410777   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:35.410787   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:35.410793   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:35.410797   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:35.410803   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:35 GMT
	I0108 12:53:35.410807   10230 round_trippers.go:580]     Audit-Id: 99e0a46c-5796-4210-b4cf-f72b72d0c76e
	I0108 12:53:35.410812   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:35.410823   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:35.410866   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:35.900972   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:35.900999   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:35.901012   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:35.901021   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:35.905540   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:35.905552   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:35.905558   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:35.905563   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:35.905571   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:35.905591   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:35 GMT
	I0108 12:53:35.905603   10230 round_trippers.go:580]     Audit-Id: daa3756c-116c-4fc6-93de-32b42589546a
	I0108 12:53:35.905612   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:35.905683   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:35.905966   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:35.905973   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:35.905979   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:35.905985   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:35.908178   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:35.908187   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:35.908193   10230 round_trippers.go:580]     Audit-Id: b64625c4-ec5a-4826-86e5-363d015d8b56
	I0108 12:53:35.908199   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:35.908203   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:35.908208   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:35.908213   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:35.908218   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:35 GMT
	I0108 12:53:35.908260   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:36.400154   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:36.400177   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:36.400191   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:36.400201   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:36.404156   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:36.404168   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:36.404174   10230 round_trippers.go:580]     Audit-Id: a69e0494-8932-4573-bd1b-d256c2a4d5bb
	I0108 12:53:36.404184   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:36.404190   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:36.404194   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:36.404199   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:36.404204   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:36 GMT
	I0108 12:53:36.404244   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:36.404538   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:36.404545   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:36.404551   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:36.404556   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:36.406522   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:36.406531   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:36.406537   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:36.406542   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:36.406547   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:36.406552   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:36 GMT
	I0108 12:53:36.406557   10230 round_trippers.go:580]     Audit-Id: 1e159bb2-7c02-49ea-b773-abf7fd71b954
	I0108 12:53:36.406561   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:36.406602   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:36.406775   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
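	[editor's note] The repeated GET requests above show minikube's readiness check (pod_ready.go) polling the coredns pod roughly every 500 ms, re-reading the node object on each iteration, until the pod's Ready condition turns True or the wait times out. The snippet below is only an illustrative sketch of that polling pattern written against client-go; the function name, the 500 ms interval, the timeout value, and the kubeconfig path are assumptions for the example and are not minikube's actual implementation.

	// readiness_poll_sketch.go - hedged illustration of the polling loop seen in the log.
	// Assumptions: kubeconfig at the default home location, namespace/pod names taken
	// from the log above, 500ms interval and 6m timeout chosen for the example only.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls the API server until the named pod reports the
	// PodReady condition as True, mirroring the GET-and-check loop in the log.
	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					// Not ready yet -> keep polling; ready -> stop.
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		// Load the local kubeconfig (assumption: default ~/.kube/config path).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Pod name taken from the log above; timeout is an illustrative value.
		if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-565d847f94-f6gqj", 6*time.Minute); err != nil {
			fmt.Println("pod never became ready:", err)
		}
	}

	In this run the pod keeps reporting Ready=False, so the loop simply keeps polling, which is why the identical request/response pairs repeat below until the test's own timeout is reached.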
	I0108 12:53:36.900794   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:36.900819   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:36.900832   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:36.900875   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:36.905007   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:36.905020   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:36.905029   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:36 GMT
	I0108 12:53:36.905036   10230 round_trippers.go:580]     Audit-Id: ae061a2a-8fd7-4196-8c82-0ff5ce057262
	I0108 12:53:36.905041   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:36.905046   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:36.905050   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:36.905055   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:36.905178   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:36.905479   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:36.905487   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:36.905494   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:36.905499   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:36.907643   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:36.907655   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:36.907662   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:36.907668   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:36.907674   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:36 GMT
	I0108 12:53:36.907679   10230 round_trippers.go:580]     Audit-Id: 6c18c6b2-f6fb-4699-bd17-c3ffe88c9c1d
	I0108 12:53:36.907684   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:36.907689   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:36.907798   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:37.400376   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:37.400389   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:37.400396   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:37.400401   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:37.403179   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:37.403189   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:37.403194   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:37 GMT
	I0108 12:53:37.403199   10230 round_trippers.go:580]     Audit-Id: a82c387e-7a7c-4d8e-9714-e93d04052de5
	I0108 12:53:37.403204   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:37.403211   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:37.403216   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:37.403222   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:37.403371   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:37.403649   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:37.403655   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:37.403661   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:37.403667   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:37.405779   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:37.405788   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:37.405793   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:37.405798   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:37.405803   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:37.405808   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:37 GMT
	I0108 12:53:37.405813   10230 round_trippers.go:580]     Audit-Id: 6ba26cba-1a4c-414d-bddb-a5016d70656d
	I0108 12:53:37.405818   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:37.405861   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:37.902182   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:37.902209   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:37.902221   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:37.902231   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:37.906328   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:37.906344   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:37.906351   10230 round_trippers.go:580]     Audit-Id: 96ceaeb9-0464-486e-96b7-1967a2de9ffb
	I0108 12:53:37.906359   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:37.906366   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:37.906372   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:37.906379   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:37.906386   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:37 GMT
	I0108 12:53:37.906443   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:37.906832   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:37.906848   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:37.906861   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:37.906870   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:37.909075   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:37.909084   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:37.909090   10230 round_trippers.go:580]     Audit-Id: e526a7e8-9132-470f-9768-2c1b3326ef4a
	I0108 12:53:37.909097   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:37.909103   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:37.909108   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:37.909112   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:37.909118   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:37 GMT
	I0108 12:53:37.909158   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:38.400935   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:38.400958   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:38.400971   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:38.400981   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:38.405556   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:38.405570   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:38.405575   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:38 GMT
	I0108 12:53:38.405580   10230 round_trippers.go:580]     Audit-Id: 2cee152e-027d-48d2-9731-73c114809b15
	I0108 12:53:38.405585   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:38.405590   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:38.405594   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:38.405599   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:38.405643   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:38.405926   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:38.405933   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:38.405939   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:38.405945   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:38.408013   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:38.408021   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:38.408027   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:38.408032   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:38.408037   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:38.408041   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:38 GMT
	I0108 12:53:38.408047   10230 round_trippers.go:580]     Audit-Id: 8f0b62a2-7f58-47b5-86eb-f5b7c9e9da00
	I0108 12:53:38.408051   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:38.408278   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:38.408461   10230 pod_ready.go:102] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"False"
	I0108 12:53:38.901498   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:38.901523   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:38.901536   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:38.901546   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:38.906016   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:38.906034   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:38.906041   10230 round_trippers.go:580]     Audit-Id: 3b0a0f4f-1cd8-4aaa-8b45-20414057729a
	I0108 12:53:38.906046   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:38.906050   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:38.906056   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:38.906060   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:38.906066   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:38 GMT
	I0108 12:53:38.906116   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:38.906411   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:38.906419   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:38.906425   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:38.906434   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:38.908573   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:38.908582   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:38.908587   10230 round_trippers.go:580]     Audit-Id: 6e078755-f6e0-469c-8ab8-9f439f630b2a
	I0108 12:53:38.908591   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:38.908596   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:38.908601   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:38.908606   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:38.908612   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:38 GMT
	I0108 12:53:38.908654   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.401613   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:39.401635   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.401648   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.401658   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.405811   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:39.405826   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.405834   10230 round_trippers.go:580]     Audit-Id: b0bef3d4-de30-4c19-9c8c-c6f17140861b
	I0108 12:53:39.405841   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.405848   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.405854   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.405860   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.405868   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.405922   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"704","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6781 chars]
	I0108 12:53:39.406245   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.406252   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.406259   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.406265   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.408311   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.408320   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.408326   10230 round_trippers.go:580]     Audit-Id: 31f9f27e-fc5b-4822-8df7-efac85e9a5a2
	I0108 12:53:39.408331   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.408336   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.408341   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.408346   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.408351   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.408394   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.900415   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:39.900441   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.900453   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.900463   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.905021   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:39.905033   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.905039   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.905044   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.905049   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.905053   10230 round_trippers.go:580]     Audit-Id: ca7d8f3f-7e7d-4334-b808-bc4758934825
	I0108 12:53:39.905058   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.905076   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.905129   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6552 chars]
	I0108 12:53:39.905407   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.905414   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.905420   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.905425   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.907745   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.907755   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.907761   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.907766   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.907771   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.907777   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.907782   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.907787   10230 round_trippers.go:580]     Audit-Id: 139d5f1c-ca50-4027-b022-fb51f0c34374
	I0108 12:53:39.907829   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.908007   10230 pod_ready.go:92] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.908017   10230 pod_ready.go:81] duration metric: took 35.013428327s waiting for pod "coredns-565d847f94-f6gqj" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.908025   10230 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.908052   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/etcd-multinode-124908
	I0108 12:53:39.908057   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.908063   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.908069   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.909982   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.909991   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.909996   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.910000   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.910006   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.910011   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.910016   10230 round_trippers.go:580]     Audit-Id: 929ff0ce-2e47-41ea-af1d-0d8d3e9f78d3
	I0108 12:53:39.910021   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.910198   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124908","namespace":"kube-system","uid":"9cf1a608-48d9-453e-bd35-263521e756e4","resourceVersion":"742","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.mirror":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.seen":"2023-01-08T20:49:35.642390520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I0108 12:53:39.910412   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.910419   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.910425   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.910430   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.912602   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.912612   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.912617   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.912623   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.912628   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.912633   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.912638   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.912643   10230 round_trippers.go:580]     Audit-Id: 8f3b5930-6eb7-44e4-85dd-3d7b9e59997d
	I0108 12:53:39.912686   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.912857   10230 pod_ready.go:92] pod "etcd-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.912864   10230 pod_ready.go:81] duration metric: took 4.833522ms waiting for pod "etcd-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.912874   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.912898   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124908
	I0108 12:53:39.912903   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.912909   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.912914   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.914700   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.914708   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.914714   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.914720   10230 round_trippers.go:580]     Audit-Id: 5940fd87-5eb4-48d5-b97c-159d73e1ddd1
	I0108 12:53:39.914725   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.914730   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.914735   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.914740   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.914783   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124908","namespace":"kube-system","uid":"7e7e7fa5-c965-4737-83b1-afd48eb87547","resourceVersion":"779","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7e3bdd07923da057548f2016d7097374","kubernetes.io/config.mirror":"7e3bdd07923da057548f2016d7097374","kubernetes.io/config.seen":"2023-01-08T20:49:35.642400230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I0108 12:53:39.915025   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.915031   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.915037   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.915042   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.917208   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.917217   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.917223   10230 round_trippers.go:580]     Audit-Id: e3eeb255-3631-4b52-93ec-42619046ee39
	I0108 12:53:39.917229   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.917234   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.917240   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.917245   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.917250   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.917283   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.917452   10230 pod_ready.go:92] pod "kube-apiserver-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.917458   10230 pod_ready.go:81] duration metric: took 4.579502ms waiting for pod "kube-apiserver-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.917464   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.917489   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124908
	I0108 12:53:39.917493   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.917499   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.917505   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.919541   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:39.919550   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.919556   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.919561   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.919566   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.919571   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.919576   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.919581   10230 round_trippers.go:580]     Audit-Id: 1ca36a59-2bf7-49fe-b32d-68df43c21004
	I0108 12:53:39.919647   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124908","namespace":"kube-system","uid":"41ff8cf2-6b35-47c2-8f48-120e6adf98bb","resourceVersion":"763","creationTimestamp":"2023-01-08T20:49:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5faaebc8229ee8bf257c9d1c46ead3c","kubernetes.io/config.mirror":"d5faaebc8229ee8bf257c9d1c46ead3c","kubernetes.io/config.seen":"2023-01-08T20:49:35.642401085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8002 chars]
	I0108 12:53:39.919900   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:39.919907   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.919912   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.919918   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.921869   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.921877   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.921883   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.921888   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.921893   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.921898   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.921903   10230 round_trippers.go:580]     Audit-Id: c206783d-4aee-4044-8b7a-72748141441a
	I0108 12:53:39.921908   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.921950   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:39.922117   10230 pod_ready.go:92] pod "kube-controller-manager-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.922123   10230 pod_ready.go:81] duration metric: took 4.654475ms waiting for pod "kube-controller-manager-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.922130   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hq6ms" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.922153   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-hq6ms
	I0108 12:53:39.922157   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.922163   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.922169   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.924039   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.924050   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.924055   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.924060   10230 round_trippers.go:580]     Audit-Id: 5eb736fb-5c2f-4264-9215-1f9c6cf5eafc
	I0108 12:53:39.924066   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.924072   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.924078   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.924083   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.924200   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hq6ms","generateName":"kube-proxy-","namespace":"kube-system","uid":"3deaa832-bac0-47e3-bdef-482b094bf90f","resourceVersion":"669","creationTimestamp":"2023-01-08T20:51:09Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I0108 12:53:39.924424   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m03
	I0108 12:53:39.924430   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:39.924436   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:39.924442   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:39.926155   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:39.926163   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:39.926168   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:39.926174   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:39.926179   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:39.926184   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:39 GMT
	I0108 12:53:39.926189   10230 round_trippers.go:580]     Audit-Id: 5a8bffc7-37c1-43be-833c-a4a9701f0551
	I0108 12:53:39.926193   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:39.926228   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908-m03","uid":"00d677bd-1b22-4d63-8258-31e7e0d73f15","resourceVersion":"756","creationTimestamp":"2023-01-08T20:51:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4321 chars]
	I0108 12:53:39.926376   10230 pod_ready.go:92] pod "kube-proxy-hq6ms" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:39.926382   10230 pod_ready.go:81] duration metric: took 4.247857ms waiting for pod "kube-proxy-hq6ms" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:39.926387   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kzv6k" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.100709   10230 request.go:614] Waited for 174.201687ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-kzv6k
	I0108 12:53:40.100756   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-kzv6k
	I0108 12:53:40.100765   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.100778   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.100793   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.104855   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:40.104873   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.104884   10230 round_trippers.go:580]     Audit-Id: 70876165-774a-4af2-9101-51aa2bd6cb4a
	I0108 12:53:40.104893   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.104899   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.104918   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.104928   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.104934   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.105152   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kzv6k","generateName":"kube-proxy-","namespace":"kube-system","uid":"05a4b261-aa83-4e23-83c6-0a50d659b5b7","resourceVersion":"705","creationTimestamp":"2023-01-08T20:49:47Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0108 12:53:40.300810   10230 request.go:614] Waited for 195.309578ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:40.300857   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:40.300865   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.300877   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.300891   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.305027   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:40.305043   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.305051   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.305057   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.305064   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.305071   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.305077   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.305084   10230 round_trippers.go:580]     Audit-Id: f205480e-b823-4e1a-9974-49498e281dc4
	I0108 12:53:40.305146   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:40.305393   10230 pod_ready.go:92] pod "kube-proxy-kzv6k" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:40.305401   10230 pod_ready.go:81] duration metric: took 379.012876ms waiting for pod "kube-proxy-kzv6k" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.305407   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vx6bb" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.500423   10230 request.go:614] Waited for 194.974989ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-vx6bb
	I0108 12:53:40.500474   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-vx6bb
	I0108 12:53:40.500484   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.500527   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.500541   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.504463   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:40.504475   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.504480   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.504486   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.504491   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.504496   10230 round_trippers.go:580]     Audit-Id: e4bdaf27-28e7-4a88-8a67-367a28d94b6f
	I0108 12:53:40.504501   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.504505   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.504560   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vx6bb","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bff7041-dbf7-4143-9f70-52a12dd69f64","resourceVersion":"467","creationTimestamp":"2023-01-08T20:50:25Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I0108 12:53:40.700980   10230 request.go:614] Waited for 196.055266ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:40.701033   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:40.701041   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.701055   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.701069   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.705141   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:40.705156   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.705164   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.705171   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.705177   10230 round_trippers.go:580]     Audit-Id: e07f4dad-e953-4aea-901b-09a4dcaadc47
	I0108 12:53:40.705184   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.705191   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.705198   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.705259   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908-m02","uid":"06778a45-7a2c-401b-918a-d4864150c87c","resourceVersion":"587","creationTimestamp":"2023-01-08T20:50:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4504 chars]
	I0108 12:53:40.705476   10230 pod_ready.go:92] pod "kube-proxy-vx6bb" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:40.705483   10230 pod_ready.go:81] duration metric: took 400.076367ms waiting for pod "kube-proxy-vx6bb" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.705490   10230 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:40.900487   10230 request.go:614] Waited for 194.956523ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124908
	I0108 12:53:40.900538   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124908
	I0108 12:53:40.900546   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:40.900590   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:40.900611   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:40.904855   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:40.904882   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:40.904890   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:40.904898   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:40 GMT
	I0108 12:53:40.904905   10230 round_trippers.go:580]     Audit-Id: 8226953c-70d7-4e9e-a22b-8e5bb441aa2b
	I0108 12:53:40.904912   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:40.904919   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:40.904926   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:40.905001   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124908","namespace":"kube-system","uid":"3dd0df78-6cad-4b47-a66f-74c412846b79","resourceVersion":"775","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"165a046b58d2e71b3de2a638cd49c0fb","kubernetes.io/config.mirror":"165a046b58d2e71b3de2a638cd49c0fb","kubernetes.io/config.seen":"2023-01-08T20:49:35.642401740Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4886 chars]
	I0108 12:53:41.101227   10230 request.go:614] Waited for 195.942208ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.101279   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.101318   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.101335   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.101368   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.106194   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:41.106210   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.106217   10230 round_trippers.go:580]     Audit-Id: f6a698be-b446-42e2-ae8f-284dde2ec675
	I0108 12:53:41.106224   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.106231   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.106241   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.106248   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.106256   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.106331   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:41.106604   10230 pod_ready.go:92] pod "kube-scheduler-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:41.106614   10230 pod_ready.go:81] duration metric: took 401.124729ms waiting for pod "kube-scheduler-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:41.106623   10230 pod_ready.go:38] duration metric: took 36.220178758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 12:53:41.106637   10230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 12:53:41.115391   10230 command_runner.go:130] > -16
	I0108 12:53:41.115410   10230 ops.go:34] apiserver oom_adj: -16
	I0108 12:53:41.115416   10230 kubeadm.go:631] restartCluster took 47.698232771s
	I0108 12:53:41.115420   10230 kubeadm.go:398] StartCluster complete in 47.729216191s
	I0108 12:53:41.115433   10230 settings.go:142] acquiring lock: {Name:mkc40aeb9f069e96cc5c51255984662f0292a058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:53:41.115513   10230 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:53:41.115873   10230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:53:41.116248   10230 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:53:41.116413   10230 kapi.go:59] client config for multinode-124908: &rest.Config{Host:"https://127.0.0.1:51399", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 12:53:41.116611   10230 round_trippers.go:463] GET https://127.0.0.1:51399/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 12:53:41.116618   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.116624   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.116630   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.119172   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:41.119182   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.119188   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.119193   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.119198   10230 round_trippers.go:580]     Content-Length: 291
	I0108 12:53:41.119203   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.119208   10230 round_trippers.go:580]     Audit-Id: 0a158930-6b2a-4180-92e2-c79cf87322d4
	I0108 12:53:41.119212   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.119218   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.119231   10230 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"810f231a-a12d-46cc-94f1-efc567a0161a","resourceVersion":"803","creationTimestamp":"2023-01-08T20:49:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 12:53:41.119319   10230 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-124908" rescaled to 1
	I0108 12:53:41.119346   10230 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 12:53:41.119353   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 12:53:41.119379   10230 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0108 12:53:41.119557   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:53:41.160646   10230 addons.go:65] Setting storage-provisioner=true in profile "multinode-124908"
	I0108 12:53:41.160589   10230 out.go:177] * Verifying Kubernetes components...
	I0108 12:53:41.160649   10230 addons.go:65] Setting default-storageclass=true in profile "multinode-124908"
	I0108 12:53:41.160680   10230 addons.go:227] Setting addon storage-provisioner=true in "multinode-124908"
	I0108 12:53:41.181798   10230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-124908"
	W0108 12:53:41.181808   10230 addons.go:236] addon storage-provisioner should already be in state true
	I0108 12:53:41.181815   10230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 12:53:41.175293   10230 command_runner.go:130] > apiVersion: v1
	I0108 12:53:41.181834   10230 command_runner.go:130] > data:
	I0108 12:53:41.181842   10230 command_runner.go:130] >   Corefile: |
	I0108 12:53:41.181846   10230 command_runner.go:130] >     .:53 {
	I0108 12:53:41.181851   10230 command_runner.go:130] >         errors
	I0108 12:53:41.181856   10230 command_runner.go:130] >         health {
	I0108 12:53:41.181857   10230 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:53:41.181869   10230 command_runner.go:130] >            lameduck 5s
	I0108 12:53:41.181873   10230 command_runner.go:130] >         }
	I0108 12:53:41.181876   10230 command_runner.go:130] >         ready
	I0108 12:53:41.181882   10230 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 12:53:41.181887   10230 command_runner.go:130] >            pods insecure
	I0108 12:53:41.181908   10230 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 12:53:41.181916   10230 command_runner.go:130] >            ttl 30
	I0108 12:53:41.181923   10230 command_runner.go:130] >         }
	I0108 12:53:41.181942   10230 command_runner.go:130] >         prometheus :9153
	I0108 12:53:41.181959   10230 command_runner.go:130] >         hosts {
	I0108 12:53:41.181964   10230 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0108 12:53:41.181968   10230 command_runner.go:130] >            fallthrough
	I0108 12:53:41.181975   10230 command_runner.go:130] >         }
	I0108 12:53:41.181980   10230 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 12:53:41.181984   10230 command_runner.go:130] >            max_concurrent 1000
	I0108 12:53:41.181988   10230 command_runner.go:130] >         }
	I0108 12:53:41.181991   10230 command_runner.go:130] >         cache 30
	I0108 12:53:41.181995   10230 command_runner.go:130] >         loop
	I0108 12:53:41.181999   10230 command_runner.go:130] >         reload
	I0108 12:53:41.182003   10230 command_runner.go:130] >         loadbalance
	I0108 12:53:41.182006   10230 command_runner.go:130] >     }
	I0108 12:53:41.182010   10230 command_runner.go:130] > kind: ConfigMap
	I0108 12:53:41.182013   10230 command_runner.go:130] > metadata:
	I0108 12:53:41.182017   10230 command_runner.go:130] >   creationTimestamp: "2023-01-08T20:49:35Z"
	I0108 12:53:41.182020   10230 command_runner.go:130] >   name: coredns
	I0108 12:53:41.182024   10230 command_runner.go:130] >   namespace: kube-system
	I0108 12:53:41.182027   10230 command_runner.go:130] >   resourceVersion: "367"
	I0108 12:53:41.182031   10230 command_runner.go:130] >   uid: 42630cd3-ff72-40ae-bd48-b7a868baf4b9
	I0108 12:53:41.182113   10230 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 12:53:41.182134   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:53:41.182199   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:53:41.193197   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:41.249743   10230 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:53:41.271021   10230 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 12:53:41.271307   10230 kapi.go:59] client config for multinode-124908: &rest.Config{Host:"https://127.0.0.1:51399", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 12:53:41.291866   10230 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 12:53:41.291884   10230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 12:53:41.292021   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:41.292185   10230 round_trippers.go:463] GET https://127.0.0.1:51399/apis/storage.k8s.io/v1/storageclasses
	I0108 12:53:41.292198   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.292211   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.292248   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.297081   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:41.297107   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.297116   10230 round_trippers.go:580]     Audit-Id: 1428949f-b14e-4e70-b573-42b12a95cf1a
	I0108 12:53:41.297123   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.297130   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.297135   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.297140   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.297145   10230 round_trippers.go:580]     Content-Length: 1273
	I0108 12:53:41.297149   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.297217   10230 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"803"},"items":[{"metadata":{"name":"standard","uid":"b0361e2f-3ac8-4575-88dc-aebe0c85a19d","resourceVersion":"376","creationTimestamp":"2023-01-08T20:49:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-08T20:49:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 12:53:41.297637   10230 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b0361e2f-3ac8-4575-88dc-aebe0c85a19d","resourceVersion":"376","creationTimestamp":"2023-01-08T20:49:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-08T20:49:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 12:53:41.297674   10230 round_trippers.go:463] PUT https://127.0.0.1:51399/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 12:53:41.297679   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.297685   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.297690   10230 round_trippers.go:473]     Content-Type: application/json
	I0108 12:53:41.297696   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.299832   10230 node_ready.go:35] waiting up to 6m0s for node "multinode-124908" to be "Ready" ...
	I0108 12:53:41.300442   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.300456   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.300477   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.300489   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.301281   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:41.301291   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.301297   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.301303   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.301309   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.301314   10230 round_trippers.go:580]     Content-Length: 1220
	I0108 12:53:41.301319   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.301324   10230 round_trippers.go:580]     Audit-Id: ac230961-8edd-4176-9590-1a127a759830
	I0108 12:53:41.301333   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.301357   10230 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b0361e2f-3ac8-4575-88dc-aebe0c85a19d","resourceVersion":"376","creationTimestamp":"2023-01-08T20:49:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-08T20:49:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 12:53:41.301435   10230 addons.go:227] Setting addon default-storageclass=true in "multinode-124908"
	W0108 12:53:41.301445   10230 addons.go:236] addon default-storageclass should already be in state true
	I0108 12:53:41.301469   10230 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:53:41.301877   10230 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:53:41.303892   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:41.303920   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.303929   10230 round_trippers.go:580]     Audit-Id: bb8ec0c2-2d28-41fc-bbf7-ee009aa8292a
	I0108 12:53:41.303937   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.303952   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.303957   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.303962   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.303983   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.304115   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:41.304418   10230 node_ready.go:49] node "multinode-124908" has status "Ready":"True"
	I0108 12:53:41.304427   10230 node_ready.go:38] duration metric: took 4.579674ms waiting for node "multinode-124908" to be "Ready" ...
	I0108 12:53:41.304436   10230 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 12:53:41.356520   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:53:41.361820   10230 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 12:53:41.361832   10230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 12:53:41.361914   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:41.420640   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:53:41.447685   10230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 12:53:41.500655   10230 request.go:614] Waited for 196.170537ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:41.500693   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:41.500699   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.500705   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.500712   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.504471   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:41.504493   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.504501   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.504508   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.504515   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.504522   10230 round_trippers.go:580]     Audit-Id: 1903d581-7a72-4925-95b4-95eb1d8d8661
	I0108 12:53:41.504529   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.504540   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.507886   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"803"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84955 chars]
	I0108 12:53:41.510464   10230 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-f6gqj" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:41.513293   10230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 12:53:41.682063   10230 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0108 12:53:41.684828   10230 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0108 12:53:41.687336   10230 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0108 12:53:41.689470   10230 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0108 12:53:41.691409   10230 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0108 12:53:41.702431   10230 request.go:614] Waited for 191.926096ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:41.702492   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/coredns-565d847f94-f6gqj
	I0108 12:53:41.702499   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.702508   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.702516   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.737837   10230 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0108 12:53:41.737859   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.737870   10230 round_trippers.go:580]     Audit-Id: c3ee1736-1ec3-4ce1-9bae-c53ccdf0e2cb
	I0108 12:53:41.737884   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.737894   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.737906   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.737915   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.737925   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.738592   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6552 chars]
	I0108 12:53:41.738690   10230 command_runner.go:130] > pod/storage-provisioner configured
	I0108 12:53:41.764480   10230 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0108 12:53:41.793895   10230 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 12:53:41.836360   10230 addons.go:488] enableAddons completed in 716.99416ms
	I0108 12:53:41.901337   10230 request.go:614] Waited for 162.271701ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.901431   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:41.901440   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:41.901453   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:41.901463   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:41.905891   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:41.905906   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:41.905915   10230 round_trippers.go:580]     Audit-Id: 4b73074a-5b37-4e19-969a-f0e4a6534ce4
	I0108 12:53:41.905935   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:41.905940   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:41.905945   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:41.905950   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:41.905955   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:41 GMT
	I0108 12:53:41.906031   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:41.906248   10230 pod_ready.go:92] pod "coredns-565d847f94-f6gqj" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:41.906271   10230 pod_ready.go:81] duration metric: took 395.782805ms waiting for pod "coredns-565d847f94-f6gqj" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:41.906277   10230 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.100909   10230 request.go:614] Waited for 194.549653ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/etcd-multinode-124908
	I0108 12:53:42.100973   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/etcd-multinode-124908
	I0108 12:53:42.100983   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.100998   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.101012   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.104923   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:42.104943   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.104954   10230 round_trippers.go:580]     Audit-Id: 8a6a6b81-b12e-4642-8e80-283d5924fa8c
	I0108 12:53:42.104977   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.104982   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.104988   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.104994   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.105000   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.105075   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-124908","namespace":"kube-system","uid":"9cf1a608-48d9-453e-bd35-263521e756e4","resourceVersion":"742","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.mirror":"83cad18480e9029408294e1fc4223245","kubernetes.io/config.seen":"2023-01-08T20:49:35.642390520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I0108 12:53:42.300421   10230 request.go:614] Waited for 195.058095ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:42.300524   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:42.300538   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.300551   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.300567   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.304869   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:42.304881   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.304887   10230 round_trippers.go:580]     Audit-Id: 931a86dc-9c69-4693-9280-45a9e62b9a66
	I0108 12:53:42.304892   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.304897   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.304902   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.304907   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.304912   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.304989   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:42.305207   10230 pod_ready.go:92] pod "etcd-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:42.305214   10230 pod_ready.go:81] duration metric: took 398.937565ms waiting for pod "etcd-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.305232   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.502373   10230 request.go:614] Waited for 197.099033ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124908
	I0108 12:53:42.502428   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-124908
	I0108 12:53:42.502437   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.502454   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.502487   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.506954   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:42.506971   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.506977   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.506984   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.506989   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.506995   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.506999   10230 round_trippers.go:580]     Audit-Id: 74d01110-1212-41f6-885d-19ae1aad79e8
	I0108 12:53:42.507004   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.507084   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-124908","namespace":"kube-system","uid":"7e7e7fa5-c965-4737-83b1-afd48eb87547","resourceVersion":"779","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7e3bdd07923da057548f2016d7097374","kubernetes.io/config.mirror":"7e3bdd07923da057548f2016d7097374","kubernetes.io/config.seen":"2023-01-08T20:49:35.642400230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I0108 12:53:42.700699   10230 request.go:614] Waited for 193.313701ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:42.700764   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:42.700776   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.700788   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.700798   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.705107   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:42.705123   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.705131   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.705138   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.705146   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.705153   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.705159   10230 round_trippers.go:580]     Audit-Id: a51a02b8-1bba-4753-819b-86dc4a494c6a
	I0108 12:53:42.705165   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.705243   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:42.705485   10230 pod_ready.go:92] pod "kube-apiserver-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:42.705491   10230 pod_ready.go:81] duration metric: took 400.256787ms waiting for pod "kube-apiserver-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.705498   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:42.902435   10230 request.go:614] Waited for 196.87458ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124908
	I0108 12:53:42.902509   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-124908
	I0108 12:53:42.902519   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:42.902531   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:42.902542   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:42.906293   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:42.906312   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:42.906320   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:42.906331   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:42.906339   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:42 GMT
	I0108 12:53:42.906346   10230 round_trippers.go:580]     Audit-Id: 161d2899-28ff-444f-b1e8-fbd0a3430b66
	I0108 12:53:42.906367   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:42.906371   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:42.906691   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-124908","namespace":"kube-system","uid":"41ff8cf2-6b35-47c2-8f48-120e6adf98bb","resourceVersion":"763","creationTimestamp":"2023-01-08T20:49:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d5faaebc8229ee8bf257c9d1c46ead3c","kubernetes.io/config.mirror":"d5faaebc8229ee8bf257c9d1c46ead3c","kubernetes.io/config.seen":"2023-01-08T20:49:35.642401085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8002 chars]
	I0108 12:53:43.100703   10230 request.go:614] Waited for 193.667903ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:43.100770   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:43.100778   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.100790   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.100802   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.105234   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:43.105245   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.105251   10230 round_trippers.go:580]     Audit-Id: e4c6d05f-c5f1-494d-a339-a2849a6c8bc9
	I0108 12:53:43.105261   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.105266   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.105271   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.105276   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.105281   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.105350   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:43.105553   10230 pod_ready.go:92] pod "kube-controller-manager-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:43.105561   10230 pod_ready.go:81] duration metric: took 400.062598ms waiting for pod "kube-controller-manager-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.105568   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hq6ms" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.302414   10230 request.go:614] Waited for 196.796115ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-hq6ms
	I0108 12:53:43.302510   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-hq6ms
	I0108 12:53:43.302546   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.302560   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.302573   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.307087   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:43.307100   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.307106   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.307114   10230 round_trippers.go:580]     Audit-Id: 2387e544-3b0a-4e3f-ae9e-dba95cda9f00
	I0108 12:53:43.307120   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.307126   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.307131   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.307135   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.307184   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hq6ms","generateName":"kube-proxy-","namespace":"kube-system","uid":"3deaa832-bac0-47e3-bdef-482b094bf90f","resourceVersion":"669","creationTimestamp":"2023-01-08T20:51:09Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I0108 12:53:43.501055   10230 request.go:614] Waited for 193.5622ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m03
	I0108 12:53:43.501110   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m03
	I0108 12:53:43.501118   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.501132   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.501145   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.505291   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:43.505318   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.505324   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.505328   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.505333   10230 round_trippers.go:580]     Audit-Id: 5957559b-79e2-4dc0-8c3b-52b24355a1ac
	I0108 12:53:43.505340   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.505345   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.505350   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.505444   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908-m03","uid":"00d677bd-1b22-4d63-8258-31e7e0d73f15","resourceVersion":"756","creationTimestamp":"2023-01-08T20:51:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:51:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4321 chars]
	I0108 12:53:43.505651   10230 pod_ready.go:92] pod "kube-proxy-hq6ms" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:43.505672   10230 pod_ready.go:81] duration metric: took 400.091366ms waiting for pod "kube-proxy-hq6ms" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.505682   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kzv6k" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.701142   10230 request.go:614] Waited for 195.377111ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-kzv6k
	I0108 12:53:43.701246   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-kzv6k
	I0108 12:53:43.701258   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.701270   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.701280   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.705772   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:43.705801   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.705807   10230 round_trippers.go:580]     Audit-Id: d867dddd-1ab7-4d7f-9efb-c519e160b01d
	I0108 12:53:43.705812   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.705817   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.705822   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.705827   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.705832   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.705905   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kzv6k","generateName":"kube-proxy-","namespace":"kube-system","uid":"05a4b261-aa83-4e23-83c6-0a50d659b5b7","resourceVersion":"705","creationTimestamp":"2023-01-08T20:49:47Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0108 12:53:43.901124   10230 request.go:614] Waited for 194.938036ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:43.901166   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:43.901174   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:43.901203   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:43.901211   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:43.903849   10230 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 12:53:43.903864   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:43.903872   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:43.903879   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:43.903883   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:43.903890   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:43 GMT
	I0108 12:53:43.903895   10230 round_trippers.go:580]     Audit-Id: 8c78c216-a837-4f50-ba4e-5583ad57e448
	I0108 12:53:43.903900   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:43.903979   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:43.904197   10230 pod_ready.go:92] pod "kube-proxy-kzv6k" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:43.904204   10230 pod_ready.go:81] duration metric: took 398.522965ms waiting for pod "kube-proxy-kzv6k" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:43.904211   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vx6bb" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:44.101943   10230 request.go:614] Waited for 197.606379ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-vx6bb
	I0108 12:53:44.101990   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-proxy-vx6bb
	I0108 12:53:44.101998   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.102010   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.102024   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.105863   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:44.105878   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.105893   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.105902   10230 round_trippers.go:580]     Audit-Id: bbf0972b-ff91-49c7-a5e8-3bbcc67c6cbc
	I0108 12:53:44.105917   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.105924   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.105933   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.105941   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.106196   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vx6bb","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bff7041-dbf7-4143-9f70-52a12dd69f64","resourceVersion":"467","creationTimestamp":"2023-01-08T20:50:25Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ceebf5ed-bacc-4cbe-87e3-48c583ee7679","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ceebf5ed-bacc-4cbe-87e3-48c583ee7679\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I0108 12:53:44.300491   10230 request.go:614] Waited for 193.961968ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:44.300555   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:44.300563   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.300577   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.300590   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.304851   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:44.304866   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.304874   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.304881   10230 round_trippers.go:580]     Audit-Id: 39decd65-fe98-451f-92d9-49d138f96fdf
	I0108 12:53:44.304888   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.304895   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.304902   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.304910   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.304984   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908-m02","uid":"06778a45-7a2c-401b-918a-d4864150c87c","resourceVersion":"587","creationTimestamp":"2023-01-08T20:50:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4504 chars]
	I0108 12:53:44.305219   10230 pod_ready.go:92] pod "kube-proxy-vx6bb" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:44.305226   10230 pod_ready.go:81] duration metric: took 401.015806ms waiting for pod "kube-proxy-vx6bb" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:44.305234   10230 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:44.500400   10230 request.go:614] Waited for 195.127146ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124908
	I0108 12:53:44.500503   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-124908
	I0108 12:53:44.500514   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.500525   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.500536   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.504669   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:44.504681   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.504687   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.504692   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.504697   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.504703   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.504708   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.504712   10230 round_trippers.go:580]     Audit-Id: 3bcb29be-4e6f-41c9-a112-4cd5f16ff2fa
	I0108 12:53:44.504774   10230 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-124908","namespace":"kube-system","uid":"3dd0df78-6cad-4b47-a66f-74c412846b79","resourceVersion":"775","creationTimestamp":"2023-01-08T20:49:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"165a046b58d2e71b3de2a638cd49c0fb","kubernetes.io/config.mirror":"165a046b58d2e71b3de2a638cd49c0fb","kubernetes.io/config.seen":"2023-01-08T20:49:35.642401740Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4886 chars]
	I0108 12:53:44.701810   10230 request.go:614] Waited for 196.77847ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:44.701946   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes/multinode-124908
	I0108 12:53:44.701956   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.701970   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.701982   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.706367   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:44.706378   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.706384   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.706389   10230 round_trippers.go:580]     Audit-Id: cef8e304-7345-4cd4-80b3-d0b61b739847
	I0108 12:53:44.706398   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.706403   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.706409   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.706414   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.706491   10230 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-08T20:49:32Z","fieldsType":"FieldsV1","fi [truncated 5275 chars]
	I0108 12:53:44.706689   10230 pod_ready.go:92] pod "kube-scheduler-multinode-124908" in "kube-system" namespace has status "Ready":"True"
	I0108 12:53:44.706696   10230 pod_ready.go:81] duration metric: took 401.46151ms waiting for pod "kube-scheduler-multinode-124908" in "kube-system" namespace to be "Ready" ...
	I0108 12:53:44.706703   10230 pod_ready.go:38] duration metric: took 3.40229807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
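The pod_ready.go loop above polls each system-critical pod (etcd, apiserver, controller-manager, the kube-proxy instances, scheduler) until its Ready condition reports True, re-querying roughly every 400ms. As a rough illustration only, not minikube's actual code, the same kind of check with client-go might look like the sketch below; the kubeconfig path is the client-go default and the pod name is taken from the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout
    // expires. Illustrative only; not minikube's pod_ready.go implementation.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(400*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat errors as transient and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = waitPodReady(cs, "kube-system", "kube-scheduler-multinode-124908", 6*time.Minute)
        fmt.Println("ready wait result:", err)
    }
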
	I0108 12:53:44.706714   10230 api_server.go:51] waiting for apiserver process to appear ...
	I0108 12:53:44.706777   10230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:53:44.716228   10230 command_runner.go:130] > 1732
	I0108 12:53:44.716896   10230 api_server.go:71] duration metric: took 3.597583907s to wait for apiserver process to appear ...
	I0108 12:53:44.716908   10230 api_server.go:87] waiting for apiserver healthz status ...
	I0108 12:53:44.716914   10230 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51399/healthz ...
	I0108 12:53:44.722287   10230 api_server.go:278] https://127.0.0.1:51399/healthz returned 200:
	ok
	I0108 12:53:44.722325   10230 round_trippers.go:463] GET https://127.0.0.1:51399/version
	I0108 12:53:44.722330   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.722337   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.722343   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.723692   10230 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 12:53:44.723700   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.723708   10230 round_trippers.go:580]     Content-Length: 263
	I0108 12:53:44.723713   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.723718   10230 round_trippers.go:580]     Audit-Id: 95cb3a4f-fb50-4d2f-9a71-751e3f025983
	I0108 12:53:44.723723   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.723727   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.723732   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.723737   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.723746   10230 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 12:53:44.723767   10230 api_server.go:140] control plane version: v1.25.3
	I0108 12:53:44.723774   10230 api_server.go:130] duration metric: took 6.86244ms to wait for apiserver health ...
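The apiserver health check above reduces to two HTTPS GETs against the forwarded port 51399: /healthz should answer 200 "ok", and /version returns the build info shown in the body. A minimal standalone sketch follows; the InsecureSkipVerify setting keeps it self-contained and is for illustration only, whereas the test authenticates with the profile's client certificates.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://127.0.0.1:51399" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %d\n%s\n", path, resp.StatusCode, body)
        }
    }
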
	I0108 12:53:44.723782   10230 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 12:53:44.901887   10230 request.go:614] Waited for 178.017792ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:44.901944   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:44.901954   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:44.902000   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:44.902012   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:44.907621   10230 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 12:53:44.907638   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:44.907646   10230 round_trippers.go:580]     Audit-Id: 79ddedb0-5762-44dd-813a-53b57d52567d
	I0108 12:53:44.907652   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:44.907659   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:44.907665   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:44.907670   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:44.907676   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:44 GMT
	I0108 12:53:44.908713   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84955 chars]
	I0108 12:53:44.910646   10230 system_pods.go:59] 12 kube-system pods found
	I0108 12:53:44.910656   10230 system_pods.go:61] "coredns-565d847f94-f6gqj" [1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7] Running
	I0108 12:53:44.910660   10230 system_pods.go:61] "etcd-multinode-124908" [9cf1a608-48d9-453e-bd35-263521e756e4] Running
	I0108 12:53:44.910665   10230 system_pods.go:61] "kindnet-4j92t" [2e0611f9-b324-4059-b858-ca1cc99bb8d9] Running
	I0108 12:53:44.910668   10230 system_pods.go:61] "kindnet-79h6s" [8899610c-9df6-488d-af2f-2848f1ce546b] Running
	I0108 12:53:44.910672   10230 system_pods.go:61] "kindnet-pj4l5" [82ac6efa-2268-472b-bd72-171778eabeb6] Running
	I0108 12:53:44.910675   10230 system_pods.go:61] "kube-apiserver-multinode-124908" [7e7e7fa5-c965-4737-83b1-afd48eb87547] Running
	I0108 12:53:44.910680   10230 system_pods.go:61] "kube-controller-manager-multinode-124908" [41ff8cf2-6b35-47c2-8f48-120e6adf98bb] Running
	I0108 12:53:44.910683   10230 system_pods.go:61] "kube-proxy-hq6ms" [3deaa832-bac0-47e3-bdef-482b094bf90f] Running
	I0108 12:53:44.910687   10230 system_pods.go:61] "kube-proxy-kzv6k" [05a4b261-aa83-4e23-83c6-0a50d659b5b7] Running
	I0108 12:53:44.910692   10230 system_pods.go:61] "kube-proxy-vx6bb" [7bff7041-dbf7-4143-9f70-52a12dd69f64] Running
	I0108 12:53:44.910696   10230 system_pods.go:61] "kube-scheduler-multinode-124908" [3dd0df78-6cad-4b47-a66f-74c412846b79] Running
	I0108 12:53:44.910701   10230 system_pods.go:61] "storage-provisioner" [6eda9f8e-814b-4a17-9ec8-89bd52973d7b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 12:53:44.910705   10230 system_pods.go:74] duration metric: took 186.921464ms to wait for pod list to return data ...
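The recurring "Waited for ... due to client-side throttling" messages come from client-go's default client-side rate limiter, which is why these GETs are paced roughly 200ms apart. A hedged sketch of the pod-list step with those knobs exposed; the QPS/Burst values shown are the usual client-go defaults, not settings confirmed from minikube's source.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Usual client-go defaults; requests beyond the burst get delayed,
        // which is what the "Waited for ..." lines above are reporting.
        cfg.QPS = 5
        cfg.Burst = 10
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    }
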
	I0108 12:53:44.910711   10230 default_sa.go:34] waiting for default service account to be created ...
	I0108 12:53:45.102385   10230 request.go:614] Waited for 191.618093ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/default/serviceaccounts
	I0108 12:53:45.102509   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/default/serviceaccounts
	I0108 12:53:45.102517   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:45.102529   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:45.102539   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:45.106506   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:45.106521   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:45.106529   10230 round_trippers.go:580]     Audit-Id: 6cf08d88-678f-4306-abeb-5695da6ee543
	I0108 12:53:45.106543   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:45.106551   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:45.106557   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:45.106564   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:45.106570   10230 round_trippers.go:580]     Content-Length: 261
	I0108 12:53:45.106577   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:45 GMT
	I0108 12:53:45.106590   10230 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ef223f23-cc02-45b1-abac-dc1674e8bcea","resourceVersion":"324","creationTimestamp":"2023-01-08T20:49:48Z"}}]}
	I0108 12:53:45.106747   10230 default_sa.go:45] found service account: "default"
	I0108 12:53:45.106758   10230 default_sa.go:55] duration metric: took 196.044911ms for default service account to be created ...
	I0108 12:53:45.106765   10230 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 12:53:45.300775   10230 request.go:614] Waited for 193.937241ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:45.300834   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/namespaces/kube-system/pods
	I0108 12:53:45.300843   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:45.300856   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:45.300869   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:45.306088   10230 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 12:53:45.306101   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:45.306107   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:45 GMT
	I0108 12:53:45.306113   10230 round_trippers.go:580]     Audit-Id: 646c793d-91d1-4135-b873-475fe1917e32
	I0108 12:53:45.306146   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:45.306163   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:45.306173   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:45.306208   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:45.307549   10230 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"coredns-565d847f94-f6gqj","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7","resourceVersion":"799","creationTimestamp":"2023-01-08T20:49:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"f3c459f2-b1da-4bbb-86fc-9824a9df345b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-08T20:49:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3c459f2-b1da-4bbb-86fc-9824a9df345b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84955 chars]
	I0108 12:53:45.309479   10230 system_pods.go:86] 12 kube-system pods found
	I0108 12:53:45.309489   10230 system_pods.go:89] "coredns-565d847f94-f6gqj" [1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7] Running
	I0108 12:53:45.309494   10230 system_pods.go:89] "etcd-multinode-124908" [9cf1a608-48d9-453e-bd35-263521e756e4] Running
	I0108 12:53:45.309498   10230 system_pods.go:89] "kindnet-4j92t" [2e0611f9-b324-4059-b858-ca1cc99bb8d9] Running
	I0108 12:53:45.309501   10230 system_pods.go:89] "kindnet-79h6s" [8899610c-9df6-488d-af2f-2848f1ce546b] Running
	I0108 12:53:45.309505   10230 system_pods.go:89] "kindnet-pj4l5" [82ac6efa-2268-472b-bd72-171778eabeb6] Running
	I0108 12:53:45.309509   10230 system_pods.go:89] "kube-apiserver-multinode-124908" [7e7e7fa5-c965-4737-83b1-afd48eb87547] Running
	I0108 12:53:45.309513   10230 system_pods.go:89] "kube-controller-manager-multinode-124908" [41ff8cf2-6b35-47c2-8f48-120e6adf98bb] Running
	I0108 12:53:45.309518   10230 system_pods.go:89] "kube-proxy-hq6ms" [3deaa832-bac0-47e3-bdef-482b094bf90f] Running
	I0108 12:53:45.309521   10230 system_pods.go:89] "kube-proxy-kzv6k" [05a4b261-aa83-4e23-83c6-0a50d659b5b7] Running
	I0108 12:53:45.309524   10230 system_pods.go:89] "kube-proxy-vx6bb" [7bff7041-dbf7-4143-9f70-52a12dd69f64] Running
	I0108 12:53:45.309531   10230 system_pods.go:89] "kube-scheduler-multinode-124908" [3dd0df78-6cad-4b47-a66f-74c412846b79] Running
	I0108 12:53:45.309537   10230 system_pods.go:89] "storage-provisioner" [6eda9f8e-814b-4a17-9ec8-89bd52973d7b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 12:53:45.309543   10230 system_pods.go:126] duration metric: took 202.776385ms to wait for k8s-apps to be running ...
	I0108 12:53:45.309548   10230 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 12:53:45.309615   10230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 12:53:45.319693   10230 system_svc.go:56] duration metric: took 10.140632ms WaitForService to wait for kubelet.
	I0108 12:53:45.319706   10230 kubeadm.go:573] duration metric: took 4.200401918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 12:53:45.319724   10230 node_conditions.go:102] verifying NodePressure condition ...
	I0108 12:53:45.502379   10230 request.go:614] Waited for 182.601969ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51399/api/v1/nodes
	I0108 12:53:45.502516   10230 round_trippers.go:463] GET https://127.0.0.1:51399/api/v1/nodes
	I0108 12:53:45.502527   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:45.502543   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:45.502555   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:45.507021   10230 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 12:53:45.507035   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:45.507041   10230 round_trippers.go:580]     Audit-Id: 5c2ce207-cd5f-4861-9459-fff03ac1a13e
	I0108 12:53:45.507046   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:45.507051   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:45.507057   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:45.507073   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:45.507082   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:45 GMT
	I0108 12:53:45.507185   10230 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"805"},"items":[{"metadata":{"name":"multinode-124908","uid":"d7091383-3261-4c9b-af3d-c3ff89606c1e","resourceVersion":"690","creationTimestamp":"2023-01-08T20:49:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-124908","kubernetes.io/os":"linux","minikube.k8s.io/commit":"85283e47cf16e06ca2b7e3404d99b788f950f286","minikube.k8s.io/name":"multinode-124908","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_08T12_49_36_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16137 chars]
	I0108 12:53:45.507611   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:45.507619   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:45.507633   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:45.507637   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:45.507640   10230 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 12:53:45.507644   10230 node_conditions.go:123] node cpu capacity is 6
	I0108 12:53:45.507647   10230 node_conditions.go:105] duration metric: took 187.921257ms to run NodePressure ...
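The NodePressure step reads capacity values such as ephemeral storage "61202244Ki" and cpu "6" for each of the three nodes. These are standard Kubernetes resource quantities; a small sketch of parsing them with the apimachinery resource package:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        eph := resource.MustParse("61202244Ki") // ephemeral-storage capacity from the log
        cpu := resource.MustParse("6")          // cpu capacity from the log
        fmt.Printf("ephemeral-storage: %d bytes (%.1f GiB)\n",
            eph.Value(), float64(eph.Value())/(1<<30))
        fmt.Printf("cpu: %d cores\n", cpu.Value())
    }
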
	I0108 12:53:45.507654   10230 start.go:217] waiting for startup goroutines ...
	I0108 12:53:45.508317   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:53:45.508386   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:53:45.529445   10230 out.go:177] * Starting worker node multinode-124908-m02 in cluster multinode-124908
	I0108 12:53:45.573063   10230 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 12:53:45.594442   10230 out.go:177] * Pulling base image ...
	I0108 12:53:45.637218   10230 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:53:45.637228   10230 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 12:53:45.637265   10230 cache.go:57] Caching tarball of preloaded images
	I0108 12:53:45.637476   10230 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 12:53:45.637499   10230 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 12:53:45.638547   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:53:45.695672   10230 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 12:53:45.695688   10230 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 12:53:45.695706   10230 cache.go:193] Successfully downloaded all kic artifacts
	I0108 12:53:45.695737   10230 start.go:364] acquiring machines lock for multinode-124908-m02: {Name:mk32c9261441e7ef10a9285ab8073f1064c4c4e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 12:53:45.695825   10230 start.go:368] acquired machines lock for "multinode-124908-m02" in 76.422µs
	I0108 12:53:45.695846   10230 start.go:96] Skipping create...Using existing machine configuration
	I0108 12:53:45.695852   10230 fix.go:55] fixHost starting: m02
	I0108 12:53:45.696139   10230 cli_runner.go:164] Run: docker container inspect multinode-124908-m02 --format={{.State.Status}}
	I0108 12:53:45.752049   10230 fix.go:103] recreateIfNeeded on multinode-124908-m02: state=Stopped err=<nil>
	W0108 12:53:45.752073   10230 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 12:53:45.773894   10230 out.go:177] * Restarting existing docker container for "multinode-124908-m02" ...
	I0108 12:53:45.815868   10230 cli_runner.go:164] Run: docker start multinode-124908-m02
	I0108 12:53:46.148768   10230 cli_runner.go:164] Run: docker container inspect multinode-124908-m02 --format={{.State.Status}}
	I0108 12:53:46.212326   10230 kic.go:415] container "multinode-124908-m02" state is running.
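The cli_runner lines above shell out to the docker CLI to restart the stopped m02 container and re-check its state. A minimal sketch of the same sequence with os/exec, illustrative rather than minikube's implementation; the container name is taken from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        name := "multinode-124908-m02" // container name from the log
        if out, err := exec.Command("docker", "start", name).CombinedOutput(); err != nil {
            fmt.Println("docker start failed:", err, string(out))
            return
        }
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            fmt.Println("docker inspect failed:", err)
            return
        }
        fmt.Println("state:", strings.TrimSpace(string(out))) // expect "running"
    }
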
	I0108 12:53:46.212952   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908-m02
	I0108 12:53:46.277649   10230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/config.json ...
	I0108 12:53:46.278157   10230 machine.go:88] provisioning docker machine ...
	I0108 12:53:46.278171   10230 ubuntu.go:169] provisioning hostname "multinode-124908-m02"
	I0108 12:53:46.278264   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:46.353849   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:46.354038   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:46.354048   10230 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-124908-m02 && echo "multinode-124908-m02" | sudo tee /etc/hostname
	I0108 12:53:46.547570   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-124908-m02
	
	I0108 12:53:46.547681   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:46.616784   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:46.616961   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:46.616976   10230 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-124908-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-124908-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-124908-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 12:53:46.739799   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: 
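Provisioning then runs shell commands on the node over SSH (setting the hostname, /etc/hostname, and /etc/hosts) through libmachine's native SSH client on the forwarded port 51429. A rough equivalent with golang.org/x/crypto/ssh is sketched below; the port, user, and key path are taken from the log, while the InsecureIgnoreHostKey call is an illustration-only shortcut.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPath := "/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa"
        key, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration-only shortcut
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:51429", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        cmd := `sudo hostname multinode-124908-m02 && echo "multinode-124908-m02" | sudo tee /etc/hostname`
        out, err := sess.CombinedOutput(cmd)
        fmt.Println(string(out), err)
    }
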
	I0108 12:53:46.739816   10230 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 12:53:46.739829   10230 ubuntu.go:177] setting up certificates
	I0108 12:53:46.739840   10230 provision.go:83] configureAuth start
	I0108 12:53:46.739937   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908-m02
	I0108 12:53:46.805029   10230 provision.go:138] copyHostCerts
	I0108 12:53:46.805084   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:53:46.805143   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 12:53:46.805149   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 12:53:46.805257   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 12:53:46.805418   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:53:46.805463   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 12:53:46.805468   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 12:53:46.805537   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 12:53:46.805666   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:53:46.805703   10230 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 12:53:46.805707   10230 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 12:53:46.805770   10230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 12:53:46.805897   10230 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.multinode-124908-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-124908-m02]
	I0108 12:53:46.916825   10230 provision.go:172] copyRemoteCerts
	I0108 12:53:46.916904   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 12:53:46.916975   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:46.979531   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:47.085539   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 12:53:47.085649   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 12:53:47.150159   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 12:53:47.150271   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 12:53:47.168673   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 12:53:47.168767   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 12:53:47.186170   10230 provision.go:86] duration metric: configureAuth took 446.326038ms
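The configureAuth step above generates a server certificate with the SANs listed earlier (192.168.58.3, 127.0.0.1, localhost, minikube, multinode-124908-m02) and copies ca.pem, server.pem, and server-key.pem to /etc/docker on the node. The sketch below only illustrates the shape of that signing step with crypto/x509; it self-signs a throwaway CA instead of loading the profile's ca.pem/ca-key.pem, so nothing here reflects minikube's actual certificate code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Throwaway CA standing in for the profile's ca.pem / ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"illustrative-CA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate carrying the SANs from the provision.go log line.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-124908-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-124908-m02"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        srvPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
        fmt.Printf("generated server.pem (%d bytes of PEM)\n", len(srvPEM))
    }
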
	I0108 12:53:47.186183   10230 ubuntu.go:193] setting minikube options for container-runtime
	I0108 12:53:47.186378   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:53:47.186454   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.246101   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:47.246261   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:47.246270   10230 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 12:53:47.361836   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 12:53:47.361853   10230 ubuntu.go:71] root file system type: overlay
	I0108 12:53:47.362094   10230 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 12:53:47.362175   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.420915   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:47.421070   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:47.421118   10230 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 12:53:47.546310   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 12:53:47.546417   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.605886   10230 main.go:134] libmachine: Using SSH client type: native
	I0108 12:53:47.606037   10230 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51429 <nil> <nil>}
	I0108 12:53:47.606050   10230 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 12:53:47.729868   10230 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 12:53:47.729893   10230 machine.go:91] provisioned docker machine in 1.451746966s
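
The provisioning step above writes the rendered unit to docker.service.new and only swaps it in (followed by daemon-reload, enable, and restart) when `diff -u` reports a difference, so an unchanged engine is never restarted. Below is a minimal local sketch of that idempotent pattern, assuming a systemd host, root privileges, and diff/systemctl on PATH; the helper name is ours, not minikube's.

    // idempotent_unit_update.go - sketch of the "write .new, diff, swap, restart"
    // pattern shown in the SSH command above (assumptions noted in the lead-in).
    package main

    import (
    	"os"
    	"os/exec"
    )

    func updateUnit(current, candidate string, content []byte) error {
    	// Write the candidate unit next to the live one.
    	if err := os.WriteFile(candidate, content, 0o644); err != nil {
    		return err
    	}
    	// diff -u exits non-zero when the files differ (or the old file is missing);
    	// only then swap the file in and restart the service.
    	if err := exec.Command("diff", "-u", current, candidate).Run(); err == nil {
    		return nil // identical: leave the running service alone
    	}
    	if err := os.Rename(candidate, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", "docker"},
    		{"restart", "docker"},
    	} {
    		if err := exec.Command("systemctl", append([]string{"-f"}, args...)...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	_ = updateUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new",
    		[]byte("[Unit]\nDescription=Docker Application Container Engine\n"))
    }
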
	I0108 12:53:47.729901   10230 start.go:300] post-start starting for "multinode-124908-m02" (driver="docker")
	I0108 12:53:47.729908   10230 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 12:53:47.730011   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 12:53:47.730086   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.787960   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:47.874997   10230 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 12:53:47.878585   10230 command_runner.go:130] > NAME="Ubuntu"
	I0108 12:53:47.878594   10230 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0108 12:53:47.878599   10230 command_runner.go:130] > ID=ubuntu
	I0108 12:53:47.878602   10230 command_runner.go:130] > ID_LIKE=debian
	I0108 12:53:47.878607   10230 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0108 12:53:47.878611   10230 command_runner.go:130] > VERSION_ID="20.04"
	I0108 12:53:47.878615   10230 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 12:53:47.878620   10230 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 12:53:47.878624   10230 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 12:53:47.878631   10230 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 12:53:47.878638   10230 command_runner.go:130] > VERSION_CODENAME=focal
	I0108 12:53:47.878642   10230 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0108 12:53:47.878688   10230 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 12:53:47.878701   10230 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 12:53:47.878708   10230 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 12:53:47.878713   10230 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 12:53:47.878718   10230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 12:53:47.878810   10230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 12:53:47.878967   10230 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 12:53:47.878973   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /etc/ssl/certs/40832.pem
	I0108 12:53:47.879172   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 12:53:47.886549   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:53:47.903518   10230 start.go:303] post-start completed in 173.609056ms
	I0108 12:53:47.903608   10230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 12:53:47.903678   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:47.961498   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:48.044531   10230 command_runner.go:130] > 12%! (MISSING)
	I0108 12:53:48.044617   10230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 12:53:48.049126   10230 command_runner.go:130] > 49G
	I0108 12:53:48.049444   10230 fix.go:57] fixHost completed within 2.353620004s
	I0108 12:53:48.049455   10230 start.go:83] releasing machines lock for "multinode-124908-m02", held for 2.35365316s
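
The free-space probe above shells out to df and takes one column from the second line of its output. A small sketch of the same parsing, assuming a local df binary; the function name is illustrative only.

    // disk_usage.go - sketch of the df-based disk check above: take a single
    // field from the second output line, as `awk 'NR==2{print $5}'` does.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func dfField(target string, field int) (string, error) {
    	out, err := exec.Command("df", "-h", target).Output()
    	if err != nil {
    		return "", err
    	}
    	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
    	if len(lines) < 2 {
    		return "", fmt.Errorf("unexpected df output: %q", out)
    	}
    	fields := strings.Fields(lines[1])
    	if field >= len(fields) {
    		return "", fmt.Errorf("df line has only %d fields", len(fields))
    	}
    	return fields[field], nil
    }

    func main() {
    	use, err := dfField("/var", 4) // column 5 (index 4) is Use%
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("use:", use)
    }
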
	I0108 12:53:48.049565   10230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908-m02
	I0108 12:53:48.130822   10230 out.go:177] * Found network options:
	I0108 12:53:48.153112   10230 out.go:177]   - NO_PROXY=192.168.58.2
	W0108 12:53:48.174750   10230 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 12:53:48.174813   10230 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 12:53:48.174918   10230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 12:53:48.174928   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 12:53:48.174998   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:48.175000   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:53:48.237628   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:48.237818   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51429 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:53:48.376909   10230 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 12:53:48.378363   10230 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0108 12:53:48.391852   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:53:48.462953   10230 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 12:53:48.555269   10230 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 12:53:48.565129   10230 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0108 12:53:48.565238   10230 command_runner.go:130] > [Unit]
	I0108 12:53:48.565249   10230 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 12:53:48.565254   10230 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 12:53:48.565259   10230 command_runner.go:130] > BindsTo=containerd.service
	I0108 12:53:48.565266   10230 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0108 12:53:48.565273   10230 command_runner.go:130] > Wants=network-online.target
	I0108 12:53:48.565283   10230 command_runner.go:130] > Requires=docker.socket
	I0108 12:53:48.565290   10230 command_runner.go:130] > StartLimitBurst=3
	I0108 12:53:48.565298   10230 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 12:53:48.565306   10230 command_runner.go:130] > [Service]
	I0108 12:53:48.565312   10230 command_runner.go:130] > Type=notify
	I0108 12:53:48.565321   10230 command_runner.go:130] > Restart=on-failure
	I0108 12:53:48.565327   10230 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0108 12:53:48.565334   10230 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 12:53:48.565349   10230 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 12:53:48.565357   10230 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 12:53:48.565362   10230 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 12:53:48.565369   10230 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 12:53:48.565375   10230 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 12:53:48.565382   10230 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 12:53:48.565397   10230 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 12:53:48.565403   10230 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 12:53:48.565407   10230 command_runner.go:130] > ExecStart=
	I0108 12:53:48.565418   10230 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0108 12:53:48.565423   10230 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 12:53:48.565434   10230 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 12:53:48.565439   10230 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 12:53:48.565443   10230 command_runner.go:130] > LimitNOFILE=infinity
	I0108 12:53:48.565447   10230 command_runner.go:130] > LimitNPROC=infinity
	I0108 12:53:48.565450   10230 command_runner.go:130] > LimitCORE=infinity
	I0108 12:53:48.565455   10230 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 12:53:48.565460   10230 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 12:53:48.565463   10230 command_runner.go:130] > TasksMax=infinity
	I0108 12:53:48.565467   10230 command_runner.go:130] > TimeoutStartSec=0
	I0108 12:53:48.565472   10230 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 12:53:48.565476   10230 command_runner.go:130] > Delegate=yes
	I0108 12:53:48.565487   10230 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 12:53:48.565491   10230 command_runner.go:130] > KillMode=process
	I0108 12:53:48.565495   10230 command_runner.go:130] > [Install]
	I0108 12:53:48.565499   10230 command_runner.go:130] > WantedBy=multi-user.target
	I0108 12:53:48.566000   10230 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 12:53:48.566088   10230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 12:53:48.575878   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 12:53:48.590192   10230 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 12:53:48.590207   10230 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 12:53:48.591191   10230 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 12:53:48.658100   10230 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 12:53:48.731370   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:53:48.803470   10230 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 12:53:49.030629   10230 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 12:53:49.104116   10230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 12:53:49.181921   10230 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 12:53:49.191913   10230 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 12:53:49.191998   10230 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 12:53:49.195981   10230 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 12:53:49.195992   10230 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 12:53:49.196001   10230 command_runner.go:130] > Device: 10002eh/1048622d	Inode: 131         Links: 1
	I0108 12:53:49.196008   10230 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0108 12:53:49.196014   10230 command_runner.go:130] > Access: 2023-01-08 20:53:48.575911189 +0000
	I0108 12:53:49.196019   10230 command_runner.go:130] > Modify: 2023-01-08 20:53:48.474911183 +0000
	I0108 12:53:49.196026   10230 command_runner.go:130] > Change: 2023-01-08 20:53:48.482911184 +0000
	I0108 12:53:49.196030   10230 command_runner.go:130] >  Birth: -
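
The runner above waits up to 60s for /var/run/cri-dockerd.sock to appear before asking crictl for a version. A sketch of that poll-with-deadline check; the path and timeout are taken from the log, the helper name is ours.

    // wait_socket.go - sketch of the "wait up to 60s for a socket path" step above.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket is present")
    }
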
	I0108 12:53:49.196052   10230 start.go:472] Will wait 60s for crictl version
	I0108 12:53:49.196103   10230 ssh_runner.go:195] Run: sudo crictl version
	I0108 12:53:49.223964   10230 command_runner.go:130] > Version:  0.1.0
	I0108 12:53:49.223977   10230 command_runner.go:130] > RuntimeName:  docker
	I0108 12:53:49.223994   10230 command_runner.go:130] > RuntimeVersion:  20.10.21
	I0108 12:53:49.224000   10230 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0108 12:53:49.225763   10230 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0108 12:53:49.225851   10230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:53:49.254079   10230 command_runner.go:130] > 20.10.21
	I0108 12:53:49.256407   10230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 12:53:49.283828   10230 command_runner.go:130] > 20.10.21
	I0108 12:53:49.330593   10230 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 12:53:49.352743   10230 out.go:177]   - env NO_PROXY=192.168.58.2
	I0108 12:53:49.374844   10230 cli_runner.go:164] Run: docker exec -t multinode-124908-m02 dig +short host.docker.internal
	I0108 12:53:49.481982   10230 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 12:53:49.482100   10230 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 12:53:49.486667   10230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
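
The /etc/hosts update above drops any previous host.minikube.internal line, appends the fresh mapping, and copies a temp file back over /etc/hosts, so the entry stays unique across restarts. A sketch of the same idempotent rewrite, assuming root when pointed at the real file; this version renames the temp file into place where the logged shell command uses cp.

    // hosts_entry.go - sketch of the idempotent /etc/hosts entry update above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale mapping for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, hostsPath)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
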
	I0108 12:53:49.496963   10230 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908 for IP: 192.168.58.3
	I0108 12:53:49.497104   10230 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 12:53:49.497166   10230 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 12:53:49.497174   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 12:53:49.497203   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 12:53:49.497232   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 12:53:49.497253   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 12:53:49.497356   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 12:53:49.497397   10230 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 12:53:49.497409   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 12:53:49.497451   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 12:53:49.497507   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 12:53:49.497544   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 12:53:49.497620   10230 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 12:53:49.497661   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.497684   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem -> /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.497708   10230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.498056   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 12:53:49.516005   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 12:53:49.533665   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 12:53:49.551119   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 12:53:49.569154   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 12:53:49.587046   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 12:53:49.604598   10230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 12:53:49.621738   10230 ssh_runner.go:195] Run: openssl version
	I0108 12:53:49.626940   10230 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0108 12:53:49.627225   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 12:53:49.635035   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.638742   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.638884   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.638955   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 12:53:49.644258   10230 command_runner.go:130] > 3ec20f2e
	I0108 12:53:49.644643   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 12:53:49.652174   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 12:53:49.660230   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.664205   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.664296   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.664351   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 12:53:49.669882   10230 command_runner.go:130] > b5213941
	I0108 12:53:49.669944   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 12:53:49.677564   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 12:53:49.685526   10230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.689557   10230 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.689677   10230 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.689728   10230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 12:53:49.694870   10230 command_runner.go:130] > 51391683
	I0108 12:53:49.695092   10230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
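
Each PEM dropped into /usr/share/ca-certificates above is hashed with `openssl x509 -hash -noout` and exposed as /etc/ssl/certs/<hash>.0, the lookup name OpenSSL-based clients expect. A sketch of that hash-and-link step, assuming openssl on PATH and write access to /etc/ssl/certs.

    // cert_link.go - sketch of the hash-and-symlink step shown in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func linkCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	if _, err := os.Lstat(link); err == nil {
    		return nil // mirrors the `test -L ... || ln -fs ...` guard
    	}
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
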
	I0108 12:53:49.703101   10230 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 12:53:49.770041   10230 command_runner.go:130] > systemd
	I0108 12:53:49.772286   10230 cni.go:95] Creating CNI manager for ""
	I0108 12:53:49.772301   10230 cni.go:156] 3 nodes found, recommending kindnet
	I0108 12:53:49.772315   10230 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 12:53:49.772334   10230 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-124908 NodeName:multinode-124908-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 12:53:49.772437   10230 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-124908-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 12:53:49.772490   10230 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-124908-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 12:53:49.772562   10230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 12:53:49.780011   10230 command_runner.go:130] > kubeadm
	I0108 12:53:49.780021   10230 command_runner.go:130] > kubectl
	I0108 12:53:49.780027   10230 command_runner.go:130] > kubelet
	I0108 12:53:49.780929   10230 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 12:53:49.780997   10230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 12:53:49.788388   10230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I0108 12:53:49.801312   10230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 12:53:49.814916   10230 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 12:53:49.818820   10230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 12:53:49.828861   10230 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:53:49.829061   10230 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:53:49.829055   10230 start.go:286] JoinCluster: &{Name:multinode-124908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-124908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:53:49.829124   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 12:53:49.829194   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:49.889142   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:53:50.040274   10230 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f 
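
The join command printed above carries a discovery-token-ca-cert-hash; kubeadm derives it as the SHA-256 of the cluster CA certificate's Subject Public Key Info (DER). A sketch of computing the same value from ca.crt; the file path here is an assumption for illustration.

    // ca_hash.go - sketch of how the discovery-token-ca-cert-hash above is derived.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func caCertHash(path string) (string, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(spki)
    	return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
    	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(h)
    }
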
	I0108 12:53:50.040306   10230 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:53:50.040325   10230 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:53:50.040571   10230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-124908-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 12:53:50.040633   10230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:53:50.100892   10230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51400 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:53:50.225056   10230 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 12:53:50.250777   10230 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-4j92t, kube-system/kube-proxy-vx6bb
	I0108 12:53:53.263004   10230 command_runner.go:130] > node/multinode-124908-m02 cordoned
	I0108 12:53:53.263020   10230 command_runner.go:130] > pod "busybox-65db55d5d6-k6vhx" has DeletionTimestamp older than 1 seconds, skipping
	I0108 12:53:53.263026   10230 command_runner.go:130] > node/multinode-124908-m02 drained
	I0108 12:53:53.263043   10230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-124908-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.222497044s)
	I0108 12:53:53.263052   10230 node.go:109] successfully drained node "m02"
	I0108 12:53:53.263384   10230 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:53:53.263599   10230 kapi.go:59] client config for multinode-124908: &rest.Config{Host:"https://127.0.0.1:51399", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/multinode-124908/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 12:53:53.263875   10230 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 12:53:53.263902   10230 round_trippers.go:463] DELETE https://127.0.0.1:51399/api/v1/nodes/multinode-124908-m02
	I0108 12:53:53.263905   10230 round_trippers.go:469] Request Headers:
	I0108 12:53:53.263912   10230 round_trippers.go:473]     Accept: application/json, */*
	I0108 12:53:53.263918   10230 round_trippers.go:473]     Content-Type: application/json
	I0108 12:53:53.263923   10230 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0108 12:53:53.267249   10230 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 12:53:53.267263   10230 round_trippers.go:577] Response Headers:
	I0108 12:53:53.267270   10230 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 12:53:53.267275   10230 round_trippers.go:580]     Content-Type: application/json
	I0108 12:53:53.267279   10230 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6c5b2a17-66f0-413b-a678-a7d5b9656f3d
	I0108 12:53:53.267284   10230 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6db7318a-1927-4e40-9ded-8ab0b26fcd55
	I0108 12:53:53.267288   10230 round_trippers.go:580]     Content-Length: 171
	I0108 12:53:53.267294   10230 round_trippers.go:580]     Date: Sun, 08 Jan 2023 20:53:53 GMT
	I0108 12:53:53.267299   10230 round_trippers.go:580]     Audit-Id: 7c0acc42-7798-47f5-8d1d-5c238749fb6c
	I0108 12:53:53.267311   10230 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-124908-m02","kind":"nodes","uid":"06778a45-7a2c-401b-918a-d4864150c87c"}}
	I0108 12:53:53.267342   10230 node.go:125] successfully deleted node "m02"
	I0108 12:53:53.267351   10230 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:53:53.267363   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:53:53.267377   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:53:53.338646   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:53:53.449568   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:53:53.449587   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 12:53:53.467352   10230 command_runner.go:130] ! W0108 20:53:53.338191    1110 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:53:53.467368   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:53:53.467389   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:53:53.467396   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:53:53.467402   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:53:53.467412   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:53:53.467424   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:53:53.467451   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0108 12:53:53.467481   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:53:53.338191    1110 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:53:53.467490   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:53:53.467501   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:53:53.509384   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:53:53.509408   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:53:53.509430   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:53:53.509453   10230 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:53:53.338191    1110 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
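
retry.go backs off between failed join attempts with a growing delay (11s, then 21s, then 26s in this run). A generic sketch of that retry-with-backoff shape follows; the intervals and jitter are illustrative, not minikube's actual parameters.

    // rejoin_retry.go - sketch of the retry loop visible above (retry.go:31).
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// grow the wait roughly geometrically and add jitter, matching the
    		// increasing intervals seen in the log
    		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    	}
    	return err
    }

    func main() {
    	attempt := 0
    	_ = retry(3, 10*time.Second, func() error {
    		attempt++
    		return fmt.Errorf("kubeadm join failed (attempt %d)", attempt)
    	})
    }
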
	I0108 12:54:04.556072   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:54:04.556131   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:54:04.594753   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:54:04.694438   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:54:04.694468   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 12:54:04.710963   10230 command_runner.go:130] ! W0108 20:54:04.594092    1653 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:54:04.710979   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:54:04.710986   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:54:04.710992   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:54:04.710998   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:54:04.711004   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:54:04.711013   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:54:04.711025   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0108 12:54:04.711053   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:04.594092    1653 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:04.711062   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:54:04.711070   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:54:04.750797   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:54:04.750812   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:04.750827   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:04.750837   10230 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:04.594092    1653 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.358759   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:54:26.358891   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:54:26.398634   10230 command_runner.go:130] ! W0108 20:54:26.398222    1875 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:54:26.398654   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:54:26.421450   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:54:26.426704   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:54:26.490593   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:54:26.490608   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:54:26.516291   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:54:26.516305   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.519234   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:54:26.519247   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:54:26.519254   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0108 12:54:26.519284   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:26.398222    1875 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.519293   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:54:26.519300   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:54:26.558560   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:54:26.558574   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.558592   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:26.558602   10230 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:26.398222    1875 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.760950   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:54:52.761003   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:54:52.797460   10230 command_runner.go:130] ! W0108 20:54:52.797019    2131 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:54:52.797474   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:54:52.820523   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:54:52.825992   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:54:52.887640   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:54:52.887658   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:54:52.913384   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:54:52.913402   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.916556   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:54:52.916569   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:54:52.916576   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0108 12:54:52.916621   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:52.797019    2131 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.916639   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:54:52.916655   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:54:52.956378   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:54:52.956394   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.956416   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:54:52.956429   10230 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:54:52.797019    2131 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:55:24.605996   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:55:24.606100   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:55:24.645169   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:55:24.744513   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:55:24.744528   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 12:55:24.761783   10230 command_runner.go:130] ! W0108 20:55:24.644734    2441 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:55:24.761803   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:55:24.761811   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:55:24.761824   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:55:24.761830   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:55:24.761835   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:55:24.761844   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:55:24.761850   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0108 12:55:24.761882   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:55:24.644734    2441 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:55:24.761890   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:55:24.761898   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:55:24.800820   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:55:24.800837   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:55:24.800857   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:55:24.800869   10230 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:55:24.644734    2441 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:56:11.610993   10230 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 12:56:11.611064   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02"
	I0108 12:56:11.650964   10230 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 12:56:11.755239   10230 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 12:56:11.755258   10230 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 12:56:11.772411   10230 command_runner.go:130] ! W0108 20:56:11.649972    2847 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 12:56:11.772426   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 12:56:11.772440   10230 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 12:56:11.772445   10230 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 12:56:11.772450   10230 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 12:56:11.772458   10230 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 12:56:11.772467   10230 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0108 12:56:11.772472   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0108 12:56:11.772512   10230 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:56:11.649972    2847 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:56:11.772523   10230 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0108 12:56:11.772535   10230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0108 12:56:11.812070   10230 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0108 12:56:11.812091   10230 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0108 12:56:11.812115   10230 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0108 12:56:11.812133   10230 start.go:288] JoinCluster complete in 2m21.984909549s
	I0108 12:56:11.834139   10230 out.go:177] 
	W0108 12:56:11.855243   10230 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bvqd44.xwvstem7lxt2btn4 --discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-124908-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0108 20:56:11.649972    2847 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-124908-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 12:56:11.855276   10230 out.go:239] * 
	W0108 12:56:11.856518   10230 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 12:56:11.919021   10230 out.go:177] 
	
	* 
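Note: every join attempt above fails in the kubelet-start phase for the same reason: the API server still holds a Ready Node object named "multinode-124908-m02", and the automatic cleanup also fails because the machine exposes two CRI sockets (containerd and cri-dockerd), so `kubeadm reset --force` refuses to pick one. A hypothetical manual recovery, reusing the profile, node, and socket names from the log above (not something this test run executed), would be:

    # remove the stale Node object so the name is free to rejoin
    kubectl delete node multinode-124908-m02
    # reset kubeadm state on the m02 machine, naming the CRI socket explicitly
    # so the "Found multiple CRI endpoints" check does not abort the reset
    minikube ssh -p multinode-124908 -n m02 "sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock"

After that, the kubeadm join command shown in the log could be retried (or the worker re-added with `minikube node add -p multinode-124908`).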
	* ==> Docker <==
	* -- Logs begin at Sun 2023-01-08 20:52:49 UTC, end at Sun 2023-01-08 20:56:13 UTC. --
	Jan 08 20:52:52 multinode-124908 dockerd[130]: time="2023-01-08T20:52:52.215176683Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 20:52:52 multinode-124908 dockerd[130]: time="2023-01-08T20:52:52.215458954Z" level=info msg="Daemon shutdown complete"
	Jan 08 20:52:52 multinode-124908 systemd[1]: docker.service: Succeeded.
	Jan 08 20:52:52 multinode-124908 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 20:52:52 multinode-124908 systemd[1]: docker.service: Consumed 1.717s CPU time.
	Jan 08 20:52:52 multinode-124908 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.263119521Z" level=info msg="Starting up"
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.264789608Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.264826903Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.264855680Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.264867877Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.266139782Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.266176778Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.266189181Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.266199637Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.269866185Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.274884163Z" level=info msg="Loading containers: start."
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.380767073Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.415538264Z" level=info msg="Loading containers: done."
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.424075787Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.424144354Z" level=info msg="Daemon has completed initialization"
	Jan 08 20:52:52 multinode-124908 systemd[1]: Started Docker Application Container Engine.
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.446613954Z" level=info msg="API listen on [::]:2376"
	Jan 08 20:52:52 multinode-124908 dockerd[676]: time="2023-01-08T20:52:52.449343528Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 20:53:34 multinode-124908 dockerd[676]: time="2023-01-08T20:53:34.583543534Z" level=info msg="ignoring event" container=b52027490eabbf999ae0c88a1478c12a90c7be3ea5133af5cd205a4b99f4b15c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	e19efe9d12e09       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   6918d9ac08666
	c85b6f6e89fc2       d6e3e26021b60                                                                                         3 minutes ago       Running             kindnet-cni               1                   1d533af07cf5b
	b52027490eabb       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   6918d9ac08666
	2f382710932f9       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   433951a855e08
	5fbdcc71a035c       5185b96f0becf                                                                                         3 minutes ago       Running             coredns                   1                   9414c38ff8f75
	4eaafbdd5df12       beaaf00edd38a                                                                                         3 minutes ago       Running             kube-proxy                1                   e564ec05f6633
	4298baae802d7       a8a176a5d5d69                                                                                         3 minutes ago       Running             etcd                      1                   661b8f0a03b62
	27f46a066b6fc       0346dbd74bcb9                                                                                         3 minutes ago       Running             kube-apiserver            1                   6df24c0f4a18e
	dc534cd603fc1       6d23ec0e8b87e                                                                                         3 minutes ago       Running             kube-scheduler            1                   4d7597f20a3e3
	234da5e06e15b       6039992312758                                                                                         3 minutes ago       Running             kube-controller-manager   1                   bdd608d10ffbd
	b79face19724f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago       Exited              busybox                   0                   bfb60246db042
	102afbd16ebea       5185b96f0becf                                                                                         6 minutes ago       Exited              coredns                   0                   87704622b4c0e
	5f5efd278d835       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              6 minutes ago       Exited              kindnet-cni               0                   c87fa6df09c35
	e8a051889a288       beaaf00edd38a                                                                                         6 minutes ago       Exited              kube-proxy                0                   e1fcc1a318f0f
	015d397fcc742       a8a176a5d5d69                                                                                         6 minutes ago       Exited              etcd                      0                   0f0a2ebaa1f80
	284f829458059       6039992312758                                                                                         6 minutes ago       Exited              kube-controller-manager   0                   a8533a49b21ad
	3af41681452ee       0346dbd74bcb9                                                                                         6 minutes ago       Exited              kube-apiserver            0                   56a7fc40cef99
	f321d9700124c       6d23ec0e8b87e                                                                                         6 minutes ago       Exited              kube-scheduler            0                   adaa05119a603
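Note: the listing above is in CRI (crictl) table format. Assuming it was gathered from the control-plane node of this profile (an assumption, the log does not say), a comparable snapshot could be taken with:

    # point crictl at the cri-dockerd socket this cluster uses
    minikube ssh -p multinode-124908 "sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a"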
	
	* 
	* ==> coredns [102afbd16ebe] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [5fbdcc71a035] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-124908
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-124908
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=multinode-124908
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T12_49_36_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 20:49:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-124908
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 20:56:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 20:53:01 +0000   Sun, 08 Jan 2023 20:49:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 20:53:01 +0000   Sun, 08 Jan 2023 20:49:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 20:53:01 +0000   Sun, 08 Jan 2023 20:49:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 08 Jan 2023 20:53:01 +0000   Sun, 08 Jan 2023 20:49:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-124908
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                dc065f8e2d1f42529ccfe18f8b887c8c
	  Boot ID:                    77459c6d-45b1-4c6b-b47b-e80c0f7ff94f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-2jztl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 coredns-565d847f94-f6gqj                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     6m26s
	  kube-system                 etcd-multinode-124908                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         6m39s
	  kube-system                 kindnet-79h6s                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m27s
	  kube-system                 kube-apiserver-multinode-124908             250m (4%)     0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-controller-manager-multinode-124908    200m (3%)     0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-proxy-kzv6k                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-multinode-124908             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m24s                  kube-proxy       
	  Normal  Starting                 3m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m50s (x4 over 6m50s)  kubelet          Node multinode-124908 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m50s (x4 over 6m50s)  kubelet          Node multinode-124908 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m50s (x4 over 6m50s)  kubelet          Node multinode-124908 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    6m39s                  kubelet          Node multinode-124908 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m39s                  kubelet          Node multinode-124908 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     6m39s                  kubelet          Node multinode-124908 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m39s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m27s                  node-controller  Node multinode-124908 event: Registered Node multinode-124908 in Controller
	  Normal  NodeReady                6m17s                  kubelet          Node multinode-124908 status is now: NodeReady
	  Normal  Starting                 3m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m17s (x8 over 3m17s)  kubelet          Node multinode-124908 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m17s (x8 over 3m17s)  kubelet          Node multinode-124908 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m17s (x7 over 3m17s)  kubelet          Node multinode-124908 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                     node-controller  Node multinode-124908 event: Registered Node multinode-124908 in Controller
	
	
	Name:               multinode-124908-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-124908-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 20:53:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-124908-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 20:56:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 20:53:53 +0000   Sun, 08 Jan 2023 20:53:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 20:53:53 +0000   Sun, 08 Jan 2023 20:53:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 20:53:53 +0000   Sun, 08 Jan 2023 20:53:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 08 Jan 2023 20:53:53 +0000   Sun, 08 Jan 2023 20:53:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-124908-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                dc065f8e2d1f42529ccfe18f8b887c8c
	  Boot ID:                    77459c6d-45b1-4c6b-b47b-e80c0f7ff94f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4j92t       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m49s
	  kube-system                 kube-proxy-vx6bb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m42s                  kube-proxy  
	  Normal  Starting                 2m17s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m49s (x8 over 6m2s)   kubelet     Node multinode-124908-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s (x8 over 6m2s)   kubelet     Node multinode-124908-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m27s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m21s (x7 over 2m27s)  kubelet     Node multinode-124908-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x7 over 2m27s)  kubelet     Node multinode-124908-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m27s)  kubelet     Node multinode-124908-m02 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-124908-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-124908-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 20:51:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-124908-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 20:52:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 08 Jan 2023 20:52:08 +0000   Sun, 08 Jan 2023 20:53:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 08 Jan 2023 20:52:08 +0000   Sun, 08 Jan 2023 20:53:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 08 Jan 2023 20:52:08 +0000   Sun, 08 Jan 2023 20:53:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 08 Jan 2023 20:52:08 +0000   Sun, 08 Jan 2023 20:53:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.4
	  Hostname:    multinode-124908-m03
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                dc065f8e2d1f42529ccfe18f8b887c8c
	  Boot ID:                    77459c6d-45b1-4c6b-b47b-e80c0f7ff94f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-dvbn2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kindnet-pj4l5               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m5s
	  kube-system                 kube-proxy-hq6ms            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m59s                  kube-proxy       
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m6s (x2 over 5m6s)    kubelet          Node multinode-124908-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m6s (x2 over 5m6s)    kubelet          Node multinode-124908-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x2 over 5m6s)    kubelet          Node multinode-124908-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m55s                  kubelet          Node multinode-124908-m03 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m17s (x2 over 4m17s)  kubelet          Node multinode-124908-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x2 over 4m17s)  kubelet          Node multinode-124908-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x2 over 4m17s)  kubelet          Node multinode-124908-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node multinode-124908-m03 event: Registered Node multinode-124908-m03 in Controller
	  Normal  NodeReady                4m6s                   kubelet          Node multinode-124908-m03 status is now: NodeReady
	  Normal  RegisteredNode           3m                     node-controller  Node multinode-124908-m03 event: Registered Node multinode-124908-m03 in Controller
	  Normal  NodeNotReady             2m20s                  node-controller  Node multinode-124908-m03 status is now: NodeNotReady
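Note: multinode-124908-m03 carries node.kubernetes.io/unreachable taints because its kubelet stopped posting status (last heartbeat 20:52:08); the node lifecycle controller removes those taints once the kubelet reports Ready again. A hypothetical way to watch for that (not part of this run):

    kubectl get node multinode-124908-m03 -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'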
	
	* 
	* ==> dmesg <==
	* [  +0.000048] FS-Cache: O-key=[8] 'a1cf8c0500000000'
	[  +0.000039] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000043] FS-Cache: N-cookie d=00000000c8eceefa{9p.inode} n=00000000de50c6c0
	[  +0.000171] FS-Cache: N-key=[8] 'a1cf8c0500000000'
	[  +0.001743] FS-Cache: Duplicate cookie detected
	[  +0.000031] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000054] FS-Cache: O-cookie d=00000000c8eceefa{9p.inode} n=00000000f7b4829b
	[  +0.000182] FS-Cache: O-key=[8] 'a1cf8c0500000000'
	[  +0.000077] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000070] FS-Cache: N-cookie d=00000000c8eceefa{9p.inode} n=00000000bb70f1a6
	[  +0.000102] FS-Cache: N-key=[8] 'a1cf8c0500000000'
	[  +3.200941] FS-Cache: Duplicate cookie detected
	[  +0.000038] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000038] FS-Cache: O-cookie d=00000000c8eceefa{9p.inode} n=0000000058ef753f
	[  +0.000051] FS-Cache: O-key=[8] 'a0cf8c0500000000'
	[  +0.000036] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000045] FS-Cache: N-cookie d=00000000c8eceefa{9p.inode} n=00000000bb70f1a6
	[  +0.000055] FS-Cache: N-key=[8] 'a0cf8c0500000000'
	[  +0.662289] FS-Cache: Duplicate cookie detected
	[  +0.000034] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000052] FS-Cache: O-cookie d=00000000c8eceefa{9p.inode} n=000000007d517ac1
	[  +0.000173] FS-Cache: O-key=[8] 'becf8c0500000000'
	[  +0.000039] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000044] FS-Cache: N-cookie d=00000000c8eceefa{9p.inode} n=00000000c94e52af
	[  +0.000088] FS-Cache: N-key=[8] 'becf8c0500000000'
	
	* 
	* ==> etcd [015d397fcc74] <==
	* {"level":"info","ts":"2023-01-08T20:49:30.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-08T20:49:30.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-01-08T20:49:30.835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-08T20:49:30.835Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-124908 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T20:49:30.835Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T20:49:30.836Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T20:49:30.836Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T20:49:30.836Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T20:49:30.836Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T20:49:30.837Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-01-08T20:49:30.839Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T20:49:30.840Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T20:49:30.840Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T20:49:30.840Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T20:50:59.466Z","caller":"traceutil/trace.go:171","msg":"trace[1968029918] linearizableReadLoop","detail":"{readStateIndex:560; appliedIndex:560; }","duration":"238.529115ms","start":"2023-01-08T20:50:59.227Z","end":"2023-01-08T20:50:59.465Z","steps":["trace[1968029918] 'read index received'  (duration: 238.501724ms)","trace[1968029918] 'applied index is now lower than readState.Index'  (duration: 26.384µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-08T20:50:59.468Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"241.348822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-08T20:50:59.468Z","caller":"traceutil/trace.go:171","msg":"trace[1155458378] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:531; }","duration":"241.464626ms","start":"2023-01-08T20:50:59.227Z","end":"2023-01-08T20:50:59.468Z","steps":["trace[1155458378] 'agreement among raft nodes before linearized reading'  (duration: 238.866616ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-08T20:52:12.246Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-08T20:52:12.246Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-124908","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2023/01/08 20:52:12 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2023/01/08 20:52:12 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2023-01-08T20:52:12.260Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2023-01-08T20:52:12.261Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-08T20:52:12.263Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-08T20:52:12.263Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"multinode-124908","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> etcd [4298baae802d] <==
	* {"level":"info","ts":"2023-01-08T20:52:58.745Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-08T20:52:58.745Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-08T20:52:58.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-01-08T20:52:58.746Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-01-08T20:52:58.746Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T20:52:58.746Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T20:52:58.747Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T20:52:58.747Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T20:52:58.747Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T20:52:58.747Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-08T20:52:58.747Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-08T20:52:59.958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-08T20:52:59.959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-08T20:52:59.959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-08T20:52:59.959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2023-01-08T20:52:59.959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-01-08T20:52:59.959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2023-01-08T20:52:59.959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-01-08T20:52:59.961Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-124908 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T20:52:59.962Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T20:52:59.962Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T20:52:59.962Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T20:52:59.962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T20:52:59.963Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T20:52:59.963Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  20:56:14 up 55 min,  0 users,  load average: 0.25, 0.63, 0.80
	Linux multinode-124908 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [27f46a066b6f] <==
	* I0108 20:53:01.521654       1 controller.go:85] Starting OpenAPI V3 controller
	I0108 20:53:01.521706       1 naming_controller.go:291] Starting NamingConditionController
	I0108 20:53:01.521722       1 establishing_controller.go:76] Starting EstablishingController
	I0108 20:53:01.521730       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0108 20:53:01.521739       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0108 20:53:01.521747       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0108 20:53:01.537552       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0108 20:53:01.538644       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0108 20:53:01.555737       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 20:53:01.587982       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 20:53:01.588136       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 20:53:01.588352       1 cache.go:39] Caches are synced for autoregister controller
	I0108 20:53:01.588805       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 20:53:01.609229       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 20:53:01.609270       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 20:53:01.610854       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 20:53:02.319862       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 20:53:02.491690       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 20:53:04.567090       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 20:53:04.772006       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 20:53:04.779939       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 20:53:04.870825       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 20:53:04.874973       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 20:53:14.478792       1 controller.go:616] quota admission added evaluator for: endpoints
	I0108 20:53:14.480439       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [3af41681452e] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0108 20:52:22.140732       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0108 20:52:22.180951       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0108 20:52:22.181021       1 logging.go:59] [core] [Channel #13 SubChannel #14] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [234da5e06e15] <==
	* I0108 20:53:14.472590       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0108 20:53:14.474335       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0108 20:53:14.474426       1 shared_informer.go:262] Caches are synced for namespace
	I0108 20:53:14.475847       1 shared_informer.go:262] Caches are synced for daemon sets
	I0108 20:53:14.476906       1 shared_informer.go:262] Caches are synced for GC
	I0108 20:53:14.479162       1 shared_informer.go:262] Caches are synced for stateful set
	I0108 20:53:14.481421       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0108 20:53:14.537963       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0108 20:53:14.562153       1 shared_informer.go:262] Caches are synced for attach detach
	I0108 20:53:14.647111       1 shared_informer.go:262] Caches are synced for disruption
	I0108 20:53:14.654988       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 20:53:14.665546       1 shared_informer.go:262] Caches are synced for deployment
	I0108 20:53:14.676210       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 20:53:14.994320       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 20:53:15.041073       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 20:53:15.041180       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 20:53:50.257616       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-dvbn2"
	W0108 20:53:53.267409       1 topologycache.go:199] Can't get CPU or zone information for multinode-124908-m03 node
	W0108 20:53:53.413944       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-124908-m02" does not exist
	W0108 20:53:53.414217       1 topologycache.go:199] Can't get CPU or zone information for multinode-124908-m02 node
	I0108 20:53:53.417274       1 range_allocator.go:367] Set node multinode-124908-m02 PodCIDR to [10.244.1.0/24]
	I0108 20:53:54.470575       1 event.go:294] "Event occurred" object="multinode-124908-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-124908-m03 status is now: NodeNotReady"
	W0108 20:53:54.470614       1 topologycache.go:199] Can't get CPU or zone information for multinode-124908-m02 node
	I0108 20:53:54.473815       1 event.go:294] "Event occurred" object="kube-system/kindnet-pj4l5" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0108 20:53:54.477481       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-hq6ms" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-controller-manager [284f82945805] <==
	* W0108 20:50:25.515244       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-124908-m02" does not exist
	I0108 20:50:25.518755       1 range_allocator.go:367] Set node multinode-124908-m02 PodCIDR to [10.244.1.0/24]
	I0108 20:50:25.521101       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vx6bb"
	I0108 20:50:25.524837       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4j92t"
	W0108 20:50:27.749212       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-124908-m02. Assuming now as a timestamp.
	I0108 20:50:27.749398       1 event.go:294] "Event occurred" object="multinode-124908-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-124908-m02 event: Registered Node multinode-124908-m02 in Controller"
	W0108 20:50:45.821231       1 topologycache.go:199] Can't get CPU or zone information for multinode-124908-m02 node
	I0108 20:50:48.487080       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-65db55d5d6 to 2"
	I0108 20:50:48.492375       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-k6vhx"
	I0108 20:50:48.536778       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-2jztl"
	W0108 20:51:09.247300       1 topologycache.go:199] Can't get CPU or zone information for multinode-124908-m02 node
	W0108 20:51:09.247369       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-124908-m03" does not exist
	I0108 20:51:09.252418       1 range_allocator.go:367] Set node multinode-124908-m03 PodCIDR to [10.244.2.0/24]
	I0108 20:51:09.254449       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pj4l5"
	I0108 20:51:09.256584       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hq6ms"
	W0108 20:51:12.761849       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-124908-m03. Assuming now as a timestamp.
	I0108 20:51:12.761924       1 event.go:294] "Event occurred" object="multinode-124908-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-124908-m03 event: Registered Node multinode-124908-m03 in Controller"
	W0108 20:51:19.575695       1 topologycache.go:199] Can't get CPU or zone information for multinode-124908-m02 node
	W0108 20:51:57.442324       1 topologycache.go:199] Can't get CPU or zone information for multinode-124908-m02 node
	I0108 20:51:57.811459       1 event.go:294] "Event occurred" object="multinode-124908-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-124908-m03 event: Removing Node multinode-124908-m03 from Controller"
	W0108 20:51:58.257477       1 topologycache.go:199] Can't get CPU or zone information for multinode-124908-m02 node
	W0108 20:51:58.257527       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-124908-m03" does not exist
	I0108 20:51:58.261599       1 range_allocator.go:367] Set node multinode-124908-m03 PodCIDR to [10.244.3.0/24]
	I0108 20:52:02.817967       1 event.go:294] "Event occurred" object="multinode-124908-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-124908-m03 event: Registered Node multinode-124908-m03 in Controller"
	W0108 20:52:08.324068       1 topologycache.go:199] Can't get CPU or zone information for multinode-124908-m02 node
	
	* 
	* ==> kube-proxy [4eaafbdd5df1] <==
	* I0108 20:53:04.256935       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0108 20:53:04.257030       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0108 20:53:04.257054       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 20:53:04.369855       1 server_others.go:206] "Using iptables Proxier"
	I0108 20:53:04.369945       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 20:53:04.369955       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 20:53:04.370008       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 20:53:04.370022       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 20:53:04.370140       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 20:53:04.370262       1 server.go:661] "Version info" version="v1.25.3"
	I0108 20:53:04.370268       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:53:04.370857       1 config.go:317] "Starting service config controller"
	I0108 20:53:04.370865       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 20:53:04.371471       1 config.go:226] "Starting endpoint slice config controller"
	I0108 20:53:04.371480       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 20:53:04.371590       1 config.go:444] "Starting node config controller"
	I0108 20:53:04.371600       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 20:53:04.472039       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0108 20:53:04.472053       1 shared_informer.go:262] Caches are synced for service config
	I0108 20:53:04.539297       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [e8a051889a28] <==
	* I0108 20:49:49.616839       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0108 20:49:49.616920       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0108 20:49:49.617040       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 20:49:49.647350       1 server_others.go:206] "Using iptables Proxier"
	I0108 20:49:49.647405       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0108 20:49:49.647413       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0108 20:49:49.647475       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0108 20:49:49.647493       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 20:49:49.647733       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 20:49:49.648606       1 server.go:661] "Version info" version="v1.25.3"
	I0108 20:49:49.648711       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:49:49.649515       1 config.go:317] "Starting service config controller"
	I0108 20:49:49.649554       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 20:49:49.649666       1 config.go:444] "Starting node config controller"
	I0108 20:49:49.649676       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 20:49:49.697379       1 config.go:226] "Starting endpoint slice config controller"
	I0108 20:49:49.697484       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 20:49:49.750640       1 shared_informer.go:262] Caches are synced for service config
	I0108 20:49:49.798321       1 shared_informer.go:262] Caches are synced for node config
	I0108 20:49:49.798380       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [dc534cd603fc] <==
	* I0108 20:52:59.266442       1 serving.go:348] Generated self-signed cert in-memory
	W0108 20:53:01.514835       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 20:53:01.514914       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:53:01.514932       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 20:53:01.514970       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 20:53:01.539749       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0108 20:53:01.539910       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:53:01.541002       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0108 20:53:01.541164       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 20:53:01.541247       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 20:53:01.541277       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 20:53:01.641691       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f321d9700124] <==
	* E0108 20:49:32.611495       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 20:49:32.611614       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:49:32.611626       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 20:49:32.611620       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 20:49:32.611654       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 20:49:32.611790       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:49:32.612106       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:49:32.611820       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:49:32.612159       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 20:49:32.612244       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:49:32.612252       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:49:33.479057       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:49:33.479114       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:49:33.595723       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:49:33.595784       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:49:33.641861       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:49:33.641906       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:49:33.794713       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:49:33.794909       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 20:49:33.796258       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 20:49:33.796516       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0108 20:49:36.208656       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 20:52:12.247910       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0108 20:52:12.248038       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0108 20:52:12.248103       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 20:52:49 UTC, end at Sun 2023-01-08 20:56:15 UTC. --
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.390768    1226 topology_manager.go:205] "Topology Admit Handler"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.390808    1226 topology_manager.go:205] "Topology Admit Handler"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.390856    1226 topology_manager.go:205] "Topology Admit Handler"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.474775    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-856cp\" (UniqueName: \"kubernetes.io/projected/05a4b261-aa83-4e23-83c6-0a50d659b5b7-kube-api-access-856cp\") pod \"kube-proxy-kzv6k\" (UID: \"05a4b261-aa83-4e23-83c6-0a50d659b5b7\") " pod="kube-system/kube-proxy-kzv6k"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.474857    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7-config-volume\") pod \"coredns-565d847f94-f6gqj\" (UID: \"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7\") " pod="kube-system/coredns-565d847f94-f6gqj"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.474882    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r45p\" (UniqueName: \"kubernetes.io/projected/6eda9f8e-814b-4a17-9ec8-89bd52973d7b-kube-api-access-9r45p\") pod \"storage-provisioner\" (UID: \"6eda9f8e-814b-4a17-9ec8-89bd52973d7b\") " pod="kube-system/storage-provisioner"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.474905    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6eda9f8e-814b-4a17-9ec8-89bd52973d7b-tmp\") pod \"storage-provisioner\" (UID: \"6eda9f8e-814b-4a17-9ec8-89bd52973d7b\") " pod="kube-system/storage-provisioner"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.474925    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8899610c-9df6-488d-af2f-2848f1ce546b-cni-cfg\") pod \"kindnet-79h6s\" (UID: \"8899610c-9df6-488d-af2f-2848f1ce546b\") " pod="kube-system/kindnet-79h6s"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.474946    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8899610c-9df6-488d-af2f-2848f1ce546b-xtables-lock\") pod \"kindnet-79h6s\" (UID: \"8899610c-9df6-488d-af2f-2848f1ce546b\") " pod="kube-system/kindnet-79h6s"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.474966    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05a4b261-aa83-4e23-83c6-0a50d659b5b7-kube-proxy\") pod \"kube-proxy-kzv6k\" (UID: \"05a4b261-aa83-4e23-83c6-0a50d659b5b7\") " pod="kube-system/kube-proxy-kzv6k"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.474986    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9wjc\" (UniqueName: \"kubernetes.io/projected/1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7-kube-api-access-m9wjc\") pod \"coredns-565d847f94-f6gqj\" (UID: \"1acae42f-23f0-4fd2-bd5d-2bdeb2f745d7\") " pod="kube-system/coredns-565d847f94-f6gqj"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.475014    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05a4b261-aa83-4e23-83c6-0a50d659b5b7-xtables-lock\") pod \"kube-proxy-kzv6k\" (UID: \"05a4b261-aa83-4e23-83c6-0a50d659b5b7\") " pod="kube-system/kube-proxy-kzv6k"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.475080    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05a4b261-aa83-4e23-83c6-0a50d659b5b7-lib-modules\") pod \"kube-proxy-kzv6k\" (UID: \"05a4b261-aa83-4e23-83c6-0a50d659b5b7\") " pod="kube-system/kube-proxy-kzv6k"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.475181    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tw6b\" (UniqueName: \"kubernetes.io/projected/8899610c-9df6-488d-af2f-2848f1ce546b-kube-api-access-4tw6b\") pod \"kindnet-79h6s\" (UID: \"8899610c-9df6-488d-af2f-2848f1ce546b\") " pod="kube-system/kindnet-79h6s"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.475259    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnjdh\" (UniqueName: \"kubernetes.io/projected/3e22c8fd-4600-4b82-bd68-0886c8a289ab-kube-api-access-rnjdh\") pod \"busybox-65db55d5d6-2jztl\" (UID: \"3e22c8fd-4600-4b82-bd68-0886c8a289ab\") " pod="default/busybox-65db55d5d6-2jztl"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.475288    1226 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8899610c-9df6-488d-af2f-2848f1ce546b-lib-modules\") pod \"kindnet-79h6s\" (UID: \"8899610c-9df6-488d-af2f-2848f1ce546b\") " pod="kube-system/kindnet-79h6s"
	Jan 08 20:53:02 multinode-124908 kubelet[1226]: I0108 20:53:02.475317    1226 reconciler.go:169] "Reconciler: start to sync state"
	Jan 08 20:53:03 multinode-124908 kubelet[1226]: I0108 20:53:03.637854    1226 request.go:682] Waited for 1.060362585s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token
	Jan 08 20:53:04 multinode-124908 kubelet[1226]: I0108 20:53:04.243463    1226 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="6918d9ac08666897500e6316950f86c8ac752968e3005bda29d9b9f711516bd9"
	Jan 08 20:53:05 multinode-124908 kubelet[1226]: I0108 20:53:05.486599    1226 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 08 20:53:09 multinode-124908 kubelet[1226]: I0108 20:53:09.742545    1226 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 08 20:53:34 multinode-124908 kubelet[1226]: I0108 20:53:34.717979    1226 scope.go:115] "RemoveContainer" containerID="0fdc50ce7b7badf55e3243b3e715f9a28f4d0945c30a8eeab113ed6e72522345"
	Jan 08 20:53:34 multinode-124908 kubelet[1226]: I0108 20:53:34.718188    1226 scope.go:115] "RemoveContainer" containerID="b52027490eabbf999ae0c88a1478c12a90c7be3ea5133af5cd205a4b99f4b15c"
	Jan 08 20:53:34 multinode-124908 kubelet[1226]: E0108 20:53:34.718333    1226 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6eda9f8e-814b-4a17-9ec8-89bd52973d7b)\"" pod="kube-system/storage-provisioner" podUID=6eda9f8e-814b-4a17-9ec8-89bd52973d7b
	Jan 08 20:53:46 multinode-124908 kubelet[1226]: I0108 20:53:46.560956    1226 scope.go:115] "RemoveContainer" containerID="b52027490eabbf999ae0c88a1478c12a90c7be3ea5133af5cd205a4b99f4b15c"
	
	* 
	* ==> storage-provisioner [b52027490eab] <==
	* I0108 20:53:04.565949       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 20:53:34.567271       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [e19efe9d12e0] <==
	* I0108 20:53:46.668251       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:53:46.678043       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:53:46.678150       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:54:04.072318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:54:04.072427       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a894e39a-d2a2-426b-84da-eb240a1eee1e", APIVersion:"v1", ResourceVersion:"887", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-124908_28fc5358-bd26-4729-aba9-8c13f089ae02 became leader
	I0108 20:54:04.072441       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-124908_28fc5358-bd26-4729-aba9-8c13f089ae02!
	I0108 20:54:04.172942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-124908_28fc5358-bd26-4729-aba9-8c13f089ae02!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-124908 -n multinode-124908
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-124908 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-dvbn2
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-124908 describe pod busybox-65db55d5d6-dvbn2
helpers_test.go:280: (dbg) kubectl --context multinode-124908 describe pod busybox-65db55d5d6-dvbn2:

                                                
                                                
-- stdout --
	Name:             busybox-65db55d5d6-dvbn2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             multinode-124908-m03/
	Labels:           app=busybox
	                  pod-template-hash=65db55d5d6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-65db55d5d6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xdcck (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-xdcck:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m26s  default-scheduler  Successfully assigned default/busybox-65db55d5d6-dvbn2 to multinode-124908-m03

                                                
                                                
-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (245.53s)

                                                
                                    
TestRunningBinaryUpgrade (68.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3443235674.exe start -p running-upgrade-130723 --memory=2200 --vm-driver=docker 
E0108 13:08:02.625245    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3443235674.exe start -p running-upgrade-130723 --memory=2200 --vm-driver=docker : exit status 70 (53.72972848s)

                                                
                                                
-- stdout --
	! [running-upgrade-130723] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3143985608
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:07:57.224205686 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-130723" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:08:16.623126074 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-130723", then "minikube start -p running-upgrade-130723 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 155.05 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 1.55 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 11.58 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 24.83 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 38.12 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 51.45 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 63.97 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 77.95 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 90.95 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 104.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 117.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 130.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 144.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 157.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 170.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 184.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 197.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 210.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 220.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 233.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 246.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 259.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 272.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 285.87 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 299.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 312.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 323.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 331.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 344.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 358.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 371.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 384.37 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 397.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 411.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 424.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 437.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 450.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 464.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 477.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 490.97 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 504.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 517.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 531.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 540.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:08:16.623126074 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
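The comments in the generated unit above explain why ExecStart= appears twice: in a systemd drop-in, an empty ExecStart= clears the command inherited from the base unit so that exactly one command remains, otherwise systemd rejects the service with the "more than one ExecStart= setting" error quoted in the diff. A minimal sketch of that pattern follows; the override path and dockerd flags are illustrative only, not taken from this run:

	# /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in)
	[Service]
	# An empty assignment clears the ExecStart inherited from the base unit,
	ExecStart=
	# so this replacement becomes the only ExecStart and the unit stays valid.
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

	# Reload unit files and restart after editing the drop-in:
	sudo systemctl daemon-reload
	sudo systemctl restart docker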
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3443235674.exe start -p running-upgrade-130723 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3443235674.exe start -p running-upgrade-130723 --memory=2200 --vm-driver=docker : exit status 70 (4.588885536s)

                                                
                                                
-- stdout --
	* [running-upgrade-130723] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1386632536
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-130723" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3443235674.exe start -p running-upgrade-130723 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3443235674.exe start -p running-upgrade-130723 --memory=2200 --vm-driver=docker : exit status 70 (4.441326634s)

                                                
                                                
-- stdout --
	* [running-upgrade-130723] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig492201680
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-130723" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-01-08 13:08:29.822692 -0800 PST m=+2506.878574009
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-130723
helpers_test.go:235: (dbg) docker inspect running-upgrade-130723:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cce10a2c849ea0e8f7b385ff7b679c8869b5eb0a2b0e8f02263e10845bc90c8b",
	        "Created": "2023-01-08T21:08:05.440678386Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 154046,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:08:05.668642034Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/cce10a2c849ea0e8f7b385ff7b679c8869b5eb0a2b0e8f02263e10845bc90c8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cce10a2c849ea0e8f7b385ff7b679c8869b5eb0a2b0e8f02263e10845bc90c8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/cce10a2c849ea0e8f7b385ff7b679c8869b5eb0a2b0e8f02263e10845bc90c8b/hosts",
	        "LogPath": "/var/lib/docker/containers/cce10a2c849ea0e8f7b385ff7b679c8869b5eb0a2b0e8f02263e10845bc90c8b/cce10a2c849ea0e8f7b385ff7b679c8869b5eb0a2b0e8f02263e10845bc90c8b-json.log",
	        "Name": "/running-upgrade-130723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-130723:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/931256100d8e65b23ba64c4b9b4f9a56924751e9b3fd9996df81f1c691ee085b-init/diff:/var/lib/docker/overlay2/4339d0aef19b9e82156ed6afc0a47cc902fc7e9bf83087995128f2a07d2fd454/diff:/var/lib/docker/overlay2/d303941d115ffe958237f9f06597edd68b611d67f9a6d7a68b49f940b9a677e3/diff:/var/lib/docker/overlay2/6cbdf392e08105ea38ca83eca9e4da63a60e0073e49cf651f74cbdd31cae6dfc/diff:/var/lib/docker/overlay2/eb032dc3deff7e35843c9c958de7b67a4f949d2eb7550b30a6c383a28df69f68/diff:/var/lib/docker/overlay2/4729fa7b65cffb7556a1a432696949070f56a3e1709e942535e444ace41b7666/diff:/var/lib/docker/overlay2/8c50910932494f597346d37455e3f630b229a8b95381110da09c900f680e486d/diff:/var/lib/docker/overlay2/3fc62bffebce434327f6be9d4d68b030866e9b1b64f54ebd2dae7556275d7987/diff:/var/lib/docker/overlay2/791589ce01828c9fc12cd784310077fb88a0444738f266d4670d719d06e2b35d/diff:/var/lib/docker/overlay2/bdd8a36c4ab4740f2397cc074ad49bcafe8f3eb5907ee1acf9e79810e97ba44c/diff:/var/lib/docker/overlay2/4f0a94
f7f31b44d6b938b58ade4036241092f4f0cb39866054e5b845d514ae56/diff:/var/lib/docker/overlay2/d03fa159dc87ca20f9df79269ff41bcc822210e05df03d7f03daf8db97547f84/diff:/var/lib/docker/overlay2/ffb7dbfd87953e32509c9d88b2eed2f9e11e3c0c54346fcd320d63a9ae146adf/diff:/var/lib/docker/overlay2/9437b6153164db7345df3671c23cca8139f04180c381bfc8e5410593b1040b6d/diff:/var/lib/docker/overlay2/79c6ca63b86d57f8e869dd786d4708901808e8e2c6fc7032ccec4243014477d7/diff:/var/lib/docker/overlay2/61c78013698167262d184b0a246b42f98492bd17e1a447d5e678e78876f4bb32/diff:/var/lib/docker/overlay2/15afe3cbc4db00efef19ecae369bc70e33665459c64a90e981ecf683006d4000/diff:/var/lib/docker/overlay2/0ecfd946d3c53fda8be276543dc6b5d9558fb7090ce8d595afcdbd40da41e8ad/diff:/var/lib/docker/overlay2/c8632b1729b92fe4889110620fe2c174cffd28959a3c399ffe39d4ea83603eb2/diff:/var/lib/docker/overlay2/d6ec0093d0f478c677a422019670b6b0e2a56d7003fce172ff797cdd0949ee29/diff:/var/lib/docker/overlay2/752e36fa2214ba6ea532ce2d18b5a7018dcd32353755dce50b86190321d637ea/diff:/var/lib/d
ocker/overlay2/1fad0941cf22dc559a597fd62099a367ac653d6df5a7fc49cba958386e9bc883/diff",
	                "MergedDir": "/var/lib/docker/overlay2/931256100d8e65b23ba64c4b9b4f9a56924751e9b3fd9996df81f1c691ee085b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/931256100d8e65b23ba64c4b9b4f9a56924751e9b3fd9996df81f1c691ee085b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/931256100d8e65b23ba64c4b9b4f9a56924751e9b3fd9996df81f1c691ee085b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-130723",
	                "Source": "/var/lib/docker/volumes/running-upgrade-130723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-130723",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-130723",
	                "name.minikube.sigs.k8s.io": "running-upgrade-130723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fdc9893db07750fbb659cbf3be643feb38f149e5d3023d109f8e694e6ec0b977",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52369"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52370"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52368"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fdc9893db077",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "469b88cbf1262e22d35f8f07209205c142b8424cd0b031feb250e00125d6451b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "605c9d610329c81415a9a3659d318d78a2c0d04fb9f7008971ba10ffbce0f25e",
	                    "EndpointID": "469b88cbf1262e22d35f8f07209205c142b8424cd0b031feb250e00125d6451b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
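The full docker inspect dump above can be narrowed to the fields the post-mortem actually cares about (run state, IP address, published ports) with docker's Go-template --format flag. The queries below are an illustrative sketch against the same container name; they are not commands the test harness runs:

	docker inspect --format '{{.State.Status}} {{.NetworkSettings.IPAddress}}' running-upgrade-130723
	docker inspect --format '{{json .NetworkSettings.Ports}}' running-upgrade-130723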
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-130723 -n running-upgrade-130723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-130723 -n running-upgrade-130723: exit status 6 (390.195981ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:08:30.260522   14013 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-130723" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-130723" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-130723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-130723
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-130723: (2.313577661s)
--- FAIL: TestRunningBinaryUpgrade (68.95s)
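The only detail surfaced for the docker.service failure is the generic control-process error; the underlying reason lives in the node's journal. Because the minikube node here is a privileged Docker container, a diagnostic sketch like the following could read the service status and recent journal entries from the host before the profile is deleted (the container name is taken from the inspect output above; these commands are not part of the test):

	docker exec running-upgrade-130723 systemctl status docker --no-pager
	docker exec running-upgrade-130723 journalctl -u docker.service --no-pager -n 50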

                                                
                                    
x
+
TestKubernetesUpgrade (582.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0108 13:09:40.713631    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:40.718784    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:40.729001    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:40.749699    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:40.789873    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:40.869984    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:41.030454    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:41.350613    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:41.990841    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:43.272726    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:45.833100    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:50.954066    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:09:59.657617    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 13:10:01.194876    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m10.657008979s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-130931] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-130931 in cluster kubernetes-upgrade-130931
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 13:09:31.826753   14402 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:09:31.826928   14402 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:09:31.826933   14402 out.go:309] Setting ErrFile to fd 2...
	I0108 13:09:31.826937   14402 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:09:31.827048   14402 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 13:09:31.827593   14402 out.go:303] Setting JSON to false
	I0108 13:09:31.846467   14402 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4144,"bootTime":1673208027,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 13:09:31.846584   14402 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:09:31.868299   14402 out.go:177] * [kubernetes-upgrade-130931] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:09:31.910229   14402 notify.go:220] Checking for updates...
	I0108 13:09:31.931840   14402 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:09:31.953143   14402 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:09:31.974256   14402 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:09:31.996276   14402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:09:32.017307   14402 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 13:09:32.039444   14402 config.go:180] Loaded profile config "cert-expiration-130630": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:09:32.039504   14402 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:09:32.099468   14402 docker.go:137] docker version: linux-20.10.21
	I0108 13:09:32.099611   14402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:09:32.240164   14402 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:09:32.149824002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:09:32.283475   14402 out.go:177] * Using the docker driver based on user configuration
	I0108 13:09:32.304740   14402 start.go:294] selected driver: docker
	I0108 13:09:32.304771   14402 start.go:838] validating driver "docker" against <nil>
	I0108 13:09:32.304798   14402 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:09:32.308778   14402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:09:32.450426   14402 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:09:32.359080804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:09:32.450549   14402 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 13:09:32.450684   14402 start_flags.go:892] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 13:09:32.472638   14402 out.go:177] * Using Docker Desktop driver with root privileges
	I0108 13:09:32.494212   14402 cni.go:95] Creating CNI manager for ""
	I0108 13:09:32.494244   14402 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:09:32.494263   14402 start_flags.go:317] config:
	{Name:kubernetes-upgrade-130931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-130931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:09:32.516281   14402 out.go:177] * Starting control plane node kubernetes-upgrade-130931 in cluster kubernetes-upgrade-130931
	I0108 13:09:32.557898   14402 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 13:09:32.579285   14402 out.go:177] * Pulling base image ...
	I0108 13:09:32.621265   14402 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 13:09:32.621267   14402 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 13:09:32.621361   14402 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 13:09:32.621385   14402 cache.go:57] Caching tarball of preloaded images
	I0108 13:09:32.621634   14402 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:09:32.622268   14402 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 13:09:32.622800   14402 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/config.json ...
	I0108 13:09:32.622895   14402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/config.json: {Name:mk327c852bc084f1aa219efa90d6a5fb69aeab77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:09:32.677307   14402 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 13:09:32.677337   14402 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 13:09:32.677437   14402 cache.go:193] Successfully downloaded all kic artifacts
	I0108 13:09:32.677485   14402 start.go:364] acquiring machines lock for kubernetes-upgrade-130931: {Name:mk4f787016e0e08c82b479a38e6950a2596b9276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:09:32.677643   14402 start.go:368] acquired machines lock for "kubernetes-upgrade-130931" in 147.053µs
	I0108 13:09:32.677675   14402 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-130931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-130931 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:09:32.677729   14402 start.go:125] createHost starting for "" (driver="docker")
	I0108 13:09:32.721387   14402 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 13:09:32.721796   14402 start.go:159] libmachine.API.Create for "kubernetes-upgrade-130931" (driver="docker")
	I0108 13:09:32.721844   14402 client.go:168] LocalClient.Create starting
	I0108 13:09:32.722049   14402 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem
	I0108 13:09:32.722138   14402 main.go:134] libmachine: Decoding PEM data...
	I0108 13:09:32.722171   14402 main.go:134] libmachine: Parsing certificate...
	I0108 13:09:32.722286   14402 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem
	I0108 13:09:32.722358   14402 main.go:134] libmachine: Decoding PEM data...
	I0108 13:09:32.722377   14402 main.go:134] libmachine: Parsing certificate...
	I0108 13:09:32.723236   14402 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-130931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 13:09:32.778013   14402 cli_runner.go:211] docker network inspect kubernetes-upgrade-130931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 13:09:32.778122   14402 network_create.go:272] running [docker network inspect kubernetes-upgrade-130931] to gather additional debugging logs...
	I0108 13:09:32.778147   14402 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-130931
	W0108 13:09:32.831699   14402 cli_runner.go:211] docker network inspect kubernetes-upgrade-130931 returned with exit code 1
	I0108 13:09:32.831722   14402 network_create.go:275] error running [docker network inspect kubernetes-upgrade-130931]: docker network inspect kubernetes-upgrade-130931: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-130931
	I0108 13:09:32.831735   14402 network_create.go:277] output of [docker network inspect kubernetes-upgrade-130931]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-130931
	
	** /stderr **
	I0108 13:09:32.831838   14402 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 13:09:32.886681   14402 network.go:306] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000012e10] misses:0}
	I0108 13:09:32.886721   14402 network.go:239] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:09:32.886733   14402 network_create.go:115] attempt to create docker network kubernetes-upgrade-130931 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 13:09:32.886822   14402 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 kubernetes-upgrade-130931
	W0108 13:09:32.941828   14402 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 kubernetes-upgrade-130931 returned with exit code 1
	W0108 13:09:32.941867   14402 network_create.go:107] failed to create docker network kubernetes-upgrade-130931 192.168.49.0/24, will retry: subnet is taken
	I0108 13:09:32.942110   14402 network.go:297] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012e10] amended:false}} dirty:map[] misses:0}
	I0108 13:09:32.942129   14402 network.go:242] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:09:32.942324   14402 network.go:306] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012e10] amended:true}} dirty:map[192.168.49.0:0xc000012e10 192.168.58.0:0xc00063c0c0] misses:0}
	I0108 13:09:32.942339   14402 network.go:239] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:09:32.942348   14402 network_create.go:115] attempt to create docker network kubernetes-upgrade-130931 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0108 13:09:32.942435   14402 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 kubernetes-upgrade-130931
	W0108 13:09:32.996862   14402 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 kubernetes-upgrade-130931 returned with exit code 1
	W0108 13:09:32.996902   14402 network_create.go:107] failed to create docker network kubernetes-upgrade-130931 192.168.58.0/24, will retry: subnet is taken
	I0108 13:09:32.997172   14402 network.go:297] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012e10] amended:true}} dirty:map[192.168.49.0:0xc000012e10 192.168.58.0:0xc00063c0c0] misses:1}
	I0108 13:09:32.997191   14402 network.go:242] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:09:32.997412   14402 network.go:306] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012e10] amended:true}} dirty:map[192.168.49.0:0xc000012e10 192.168.58.0:0xc00063c0c0 192.168.67.0:0xc000114360] misses:1}
	I0108 13:09:32.997423   14402 network.go:239] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:09:32.997435   14402 network_create.go:115] attempt to create docker network kubernetes-upgrade-130931 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0108 13:09:32.997531   14402 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 kubernetes-upgrade-130931
	W0108 13:09:33.051235   14402 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 kubernetes-upgrade-130931 returned with exit code 1
	W0108 13:09:33.051284   14402 network_create.go:107] failed to create docker network kubernetes-upgrade-130931 192.168.67.0/24, will retry: subnet is taken
	I0108 13:09:33.051556   14402 network.go:297] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012e10] amended:true}} dirty:map[192.168.49.0:0xc000012e10 192.168.58.0:0xc00063c0c0 192.168.67.0:0xc000114360] misses:2}
	I0108 13:09:33.051576   14402 network.go:242] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:09:33.051781   14402 network.go:306] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012e10] amended:true}} dirty:map[192.168.49.0:0xc000012e10 192.168.58.0:0xc00063c0c0 192.168.67.0:0xc000114360 192.168.76.0:0xc000c02020] misses:2}
	I0108 13:09:33.051796   14402 network.go:239] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:09:33.051804   14402 network_create.go:115] attempt to create docker network kubernetes-upgrade-130931 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0108 13:09:33.051901   14402 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 kubernetes-upgrade-130931
	I0108 13:09:33.139535   14402 network_create.go:99] docker network kubernetes-upgrade-130931 192.168.76.0/24 created
	I0108 13:09:33.139569   14402 kic.go:106] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-130931" container
	I0108 13:09:33.139694   14402 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 13:09:33.196689   14402 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-130931 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 --label created_by.minikube.sigs.k8s.io=true
	I0108 13:09:33.253242   14402 oci.go:103] Successfully created a docker volume kubernetes-upgrade-130931
	I0108 13:09:33.253392   14402 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-130931-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 --entrypoint /usr/bin/test -v kubernetes-upgrade-130931:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0108 13:09:33.716096   14402 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-130931
	I0108 13:09:33.716128   14402 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 13:09:33.716144   14402 kic.go:179] Starting extracting preloaded images to volume ...
	I0108 13:09:33.716272   14402 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-130931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 13:09:39.491309   14402 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-130931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (5.774950005s)
	I0108 13:09:39.491329   14402 kic.go:188] duration metric: took 5.775164 seconds to extract preloaded images to volume
	I0108 13:09:39.491462   14402 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 13:09:39.637325   14402 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-130931 --name kubernetes-upgrade-130931 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-130931 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-130931 --network kubernetes-upgrade-130931 --ip 192.168.76.2 --volume kubernetes-upgrade-130931:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0108 13:09:40.002915   14402 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-130931 --format={{.State.Running}}
	I0108 13:09:40.066975   14402 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-130931 --format={{.State.Status}}
	I0108 13:09:40.132461   14402 cli_runner.go:164] Run: docker exec kubernetes-upgrade-130931 stat /var/lib/dpkg/alternatives/iptables
	I0108 13:09:40.252138   14402 oci.go:144] the created container "kubernetes-upgrade-130931" has a running status.
	I0108 13:09:40.252169   14402 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa...
	I0108 13:09:40.389518   14402 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 13:09:40.498445   14402 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-130931 --format={{.State.Status}}
	I0108 13:09:40.557913   14402 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 13:09:40.557934   14402 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-130931 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 13:09:40.658059   14402 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-130931 --format={{.State.Status}}
	I0108 13:09:40.716070   14402 machine.go:88] provisioning docker machine ...
	I0108 13:09:40.716114   14402 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-130931"
	I0108 13:09:40.716242   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:40.775821   14402 main.go:134] libmachine: Using SSH client type: native
	I0108 13:09:40.776016   14402 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52486 <nil> <nil>}
	I0108 13:09:40.776029   14402 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-130931 && echo "kubernetes-upgrade-130931" | sudo tee /etc/hostname
	I0108 13:09:40.904502   14402 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-130931
	
	I0108 13:09:40.904634   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:40.962747   14402 main.go:134] libmachine: Using SSH client type: native
	I0108 13:09:40.962909   14402 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52486 <nil> <nil>}
	I0108 13:09:40.962924   14402 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-130931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-130931/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-130931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 13:09:41.081074   14402 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:09:41.081094   14402 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 13:09:41.081117   14402 ubuntu.go:177] setting up certificates
	I0108 13:09:41.081126   14402 provision.go:83] configureAuth start
	I0108 13:09:41.081210   14402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-130931
	I0108 13:09:41.192114   14402 provision.go:138] copyHostCerts
	I0108 13:09:41.192206   14402 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 13:09:41.192214   14402 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 13:09:41.192355   14402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 13:09:41.192570   14402 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 13:09:41.192576   14402 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 13:09:41.192644   14402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 13:09:41.192805   14402 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 13:09:41.192811   14402 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 13:09:41.192882   14402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 13:09:41.193041   14402 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-130931 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-130931]
	I0108 13:09:41.226898   14402 provision.go:172] copyRemoteCerts
	I0108 13:09:41.226963   14402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 13:09:41.227052   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:41.288403   14402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52486 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:09:41.375039   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 13:09:41.392389   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 13:09:41.409602   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0108 13:09:41.426713   14402 provision.go:86] duration metric: configureAuth took 345.570913ms
	I0108 13:09:41.426727   14402 ubuntu.go:193] setting minikube options for container-runtime
	I0108 13:09:41.426954   14402 config.go:180] Loaded profile config "kubernetes-upgrade-130931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0108 13:09:41.427049   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:41.484518   14402 main.go:134] libmachine: Using SSH client type: native
	I0108 13:09:41.484675   14402 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52486 <nil> <nil>}
	I0108 13:09:41.484688   14402 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 13:09:41.603751   14402 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 13:09:41.603769   14402 ubuntu.go:71] root file system type: overlay
	I0108 13:09:41.603918   14402 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 13:09:41.604025   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:41.661817   14402 main.go:134] libmachine: Using SSH client type: native
	I0108 13:09:41.661988   14402 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52486 <nil> <nil>}
	I0108 13:09:41.662037   14402 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 13:09:41.786171   14402 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 13:09:41.786293   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:41.862994   14402 main.go:134] libmachine: Using SSH client type: native
	I0108 13:09:41.863158   14402 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52486 <nil> <nil>}
	I0108 13:09:41.863171   14402 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 13:09:42.458116   14402 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:09:41.784148789 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0108 13:09:42.458137   14402 machine.go:91] provisioned docker machine in 1.742042007s
	I0108 13:09:42.458144   14402 client.go:171] LocalClient.Create took 9.736255525s
	I0108 13:09:42.458162   14402 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-130931" took 9.736334248s
	I0108 13:09:42.458172   14402 start.go:300] post-start starting for "kubernetes-upgrade-130931" (driver="docker")
	I0108 13:09:42.458176   14402 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 13:09:42.458254   14402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 13:09:42.458325   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:42.517249   14402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52486 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:09:42.603974   14402 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 13:09:42.607537   14402 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 13:09:42.607553   14402 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 13:09:42.607560   14402 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 13:09:42.607565   14402 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 13:09:42.607575   14402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 13:09:42.607674   14402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 13:09:42.607861   14402 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 13:09:42.608080   14402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 13:09:42.615577   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:09:42.632637   14402 start.go:303] post-start completed in 174.456846ms
	I0108 13:09:42.633193   14402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-130931
	I0108 13:09:42.692501   14402 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/config.json ...
	I0108 13:09:42.692942   14402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 13:09:42.693010   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:42.752059   14402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52486 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:09:42.837300   14402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 13:09:42.842019   14402 start.go:128] duration metric: createHost completed in 10.164237244s
	I0108 13:09:42.842038   14402 start.go:83] releasing machines lock for "kubernetes-upgrade-130931", held for 10.164348466s
	I0108 13:09:42.842144   14402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-130931
	I0108 13:09:42.899278   14402 ssh_runner.go:195] Run: cat /version.json
	I0108 13:09:42.899278   14402 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 13:09:42.899371   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:42.899385   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:42.963851   14402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52486 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:09:42.964593   14402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52486 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:09:43.047987   14402 ssh_runner.go:195] Run: systemctl --version
	I0108 13:09:43.311858   14402 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 13:09:43.322029   14402 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 13:09:43.322091   14402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 13:09:43.332484   14402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 13:09:43.345434   14402 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 13:09:43.423570   14402 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 13:09:43.496351   14402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:09:43.562934   14402 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 13:09:43.767556   14402 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:09:43.797412   14402 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:09:43.871106   14402 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	I0108 13:09:43.871355   14402 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-130931 dig +short host.docker.internal
	I0108 13:09:43.989567   14402 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 13:09:43.989680   14402 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 13:09:43.994122   14402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:09:44.004027   14402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:09:44.064096   14402 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 13:09:44.064187   14402 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:09:44.088865   14402 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 13:09:44.088885   14402 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:09:44.088981   14402 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:09:44.113052   14402 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 13:09:44.113070   14402 cache_images.go:84] Images are preloaded, skipping loading
	I0108 13:09:44.113179   14402 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 13:09:44.186106   14402 cni.go:95] Creating CNI manager for ""
	I0108 13:09:44.186125   14402 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:09:44.186144   14402 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 13:09:44.186166   14402 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-130931 NodeName:kubernetes-upgrade-130931 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 13:09:44.186301   14402 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-130931"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-130931
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 13:09:44.186382   14402 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-130931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-130931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 13:09:44.186461   14402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 13:09:44.194707   14402 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 13:09:44.194773   14402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 13:09:44.202226   14402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0108 13:09:44.215163   14402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 13:09:44.228100   14402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0108 13:09:44.241734   14402 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 13:09:44.245660   14402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:09:44.255384   14402 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931 for IP: 192.168.76.2
	I0108 13:09:44.255537   14402 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 13:09:44.255615   14402 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 13:09:44.255665   14402 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.key
	I0108 13:09:44.255684   14402 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.crt with IP's: []
	I0108 13:09:44.379721   14402 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.crt ...
	I0108 13:09:44.379735   14402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.crt: {Name:mkae28b9d525e2bcf030072f3b2b4ee20b0ef8c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:09:44.380048   14402 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.key ...
	I0108 13:09:44.380057   14402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.key: {Name:mk70318b2fa203896346598afe5639c7ef54b742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:09:44.380285   14402 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.key.31bdca25
	I0108 13:09:44.380304   14402 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 13:09:44.491074   14402 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.crt.31bdca25 ...
	I0108 13:09:44.491091   14402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.crt.31bdca25: {Name:mk290326848309f0cbeec86f0716f47e673e0e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:09:44.491383   14402 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.key.31bdca25 ...
	I0108 13:09:44.491392   14402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.key.31bdca25: {Name:mk7619b4fccb5f6ba63826706c563c0e29c5f90b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:09:44.491586   14402 certs.go:320] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.crt
	I0108 13:09:44.491758   14402 certs.go:324] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.key
	I0108 13:09:44.491928   14402 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.key
	I0108 13:09:44.491947   14402 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.crt with IP's: []
	I0108 13:09:44.711982   14402 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.crt ...
	I0108 13:09:44.711998   14402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.crt: {Name:mkdd1bef353338463021ca54cbd0ee974768017e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:09:44.712291   14402 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.key ...
	I0108 13:09:44.712302   14402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.key: {Name:mkd4981041f210ae854a8a3c1e5755e85d416665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:09:44.712850   14402 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 13:09:44.712900   14402 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 13:09:44.712915   14402 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 13:09:44.712980   14402 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 13:09:44.713107   14402 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 13:09:44.713145   14402 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 13:09:44.713232   14402 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:09:44.713776   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 13:09:44.732719   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 13:09:44.750191   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 13:09:44.767598   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 13:09:44.784929   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 13:09:44.802113   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 13:09:44.819597   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 13:09:44.837036   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 13:09:44.854588   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 13:09:44.871743   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 13:09:44.889364   14402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 13:09:44.906524   14402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 13:09:44.919594   14402 ssh_runner.go:195] Run: openssl version
	I0108 13:09:44.925286   14402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 13:09:44.933873   14402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:09:44.938091   14402 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:09:44.938170   14402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:09:44.943981   14402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 13:09:44.952271   14402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 13:09:44.960693   14402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 13:09:44.964805   14402 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 13:09:44.964863   14402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 13:09:44.970465   14402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 13:09:44.978555   14402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 13:09:44.987109   14402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 13:09:44.991028   14402 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 13:09:44.991082   14402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 13:09:44.996503   14402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
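The openssl x509 -hash / ln -fs pairs above follow the standard c_rehash layout: each CA copied under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash plus a .0 suffix (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal sketch of the same step done by hand for the minikubeCA certificate, assuming the paths from this run:

	# derive the subject hash and create the matching /etc/ssl/certs symlink
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"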
	I0108 13:09:45.004746   14402 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-130931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-130931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:09:45.004861   14402 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:09:45.029162   14402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 13:09:45.037322   14402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:09:45.045329   14402 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 13:09:45.045414   14402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:09:45.053224   14402 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 13:09:45.053253   14402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 13:09:45.102986   14402 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 13:09:45.103028   14402 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 13:09:45.415001   14402 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 13:09:45.415170   14402 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 13:09:45.415264   14402 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 13:09:45.644953   14402 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 13:09:45.645745   14402 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 13:09:45.652209   14402 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 13:09:45.722953   14402 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 13:09:45.766416   14402 out.go:204]   - Generating certificates and keys ...
	I0108 13:09:45.766492   14402 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 13:09:45.766557   14402 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 13:09:45.829089   14402 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 13:09:45.918883   14402 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0108 13:09:46.105092   14402 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0108 13:09:46.229951   14402 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0108 13:09:46.453856   14402 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0108 13:09:46.454003   14402 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-130931 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0108 13:09:46.616802   14402 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0108 13:09:46.616987   14402 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-130931 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0108 13:09:46.779863   14402 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 13:09:46.864575   14402 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 13:09:47.015872   14402 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0108 13:09:47.015976   14402 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 13:09:47.154092   14402 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 13:09:47.343775   14402 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 13:09:47.406071   14402 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 13:09:47.584950   14402 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 13:09:47.585700   14402 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 13:09:47.607251   14402 out.go:204]   - Booting up control plane ...
	I0108 13:09:47.607362   14402 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 13:09:47.607431   14402 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 13:09:47.607484   14402 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 13:09:47.607549   14402 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 13:09:47.607682   14402 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 13:10:27.595768   14402 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 13:10:27.597618   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:10:27.597814   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:10:32.598604   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:10:32.598773   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:10:42.599789   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:10:42.599914   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:11:02.600621   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:11:02.600800   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:11:42.602230   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:11:42.602573   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:11:42.602597   14402 kubeadm.go:317] 
	I0108 13:11:42.602625   14402 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 13:11:42.602680   14402 kubeadm.go:317] 	timed out waiting for the condition
	I0108 13:11:42.602709   14402 kubeadm.go:317] 
	I0108 13:11:42.602740   14402 kubeadm.go:317] This error is likely caused by:
	I0108 13:11:42.602764   14402 kubeadm.go:317] 	- The kubelet is not running
	I0108 13:11:42.602834   14402 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 13:11:42.602838   14402 kubeadm.go:317] 
	I0108 13:11:42.602922   14402 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 13:11:42.602990   14402 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 13:11:42.603017   14402 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 13:11:42.603023   14402 kubeadm.go:317] 
	I0108 13:11:42.603141   14402 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 13:11:42.603312   14402 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 13:11:42.603394   14402 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 13:11:42.603491   14402 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 13:11:42.603546   14402 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 13:11:42.603605   14402 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0108 13:11:42.606790   14402 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 13:11:42.607003   14402 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0108 13:11:42.607192   14402 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 13:11:42.607299   14402 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 13:11:42.607389   14402 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0108 13:11:42.607623   14402 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-130931 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-130931 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-130931 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-130931 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 13:11:42.607665   14402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 13:11:43.052183   14402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:11:43.063047   14402 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 13:11:43.063127   14402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:11:43.075207   14402 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 13:11:43.075237   14402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 13:11:43.133342   14402 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 13:11:43.133410   14402 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 13:11:43.527096   14402 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 13:11:43.527224   14402 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 13:11:43.527313   14402 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 13:11:43.838306   14402 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 13:11:43.840034   14402 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 13:11:43.850654   14402 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 13:11:43.938638   14402 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 13:11:43.992146   14402 out.go:204]   - Generating certificates and keys ...
	I0108 13:11:43.992349   14402 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 13:11:43.992447   14402 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 13:11:43.992563   14402 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 13:11:43.992700   14402 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 13:11:43.992833   14402 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 13:11:43.992893   14402 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 13:11:43.992959   14402 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 13:11:43.993024   14402 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 13:11:43.993081   14402 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 13:11:43.993134   14402 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 13:11:43.993173   14402 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 13:11:43.993236   14402 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 13:11:44.050649   14402 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 13:11:44.149514   14402 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 13:11:44.317656   14402 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 13:11:44.703962   14402 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 13:11:44.704737   14402 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 13:11:44.731060   14402 out.go:204]   - Booting up control plane ...
	I0108 13:11:44.731273   14402 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 13:11:44.731438   14402 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 13:11:44.731563   14402 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 13:11:44.731700   14402 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 13:11:44.731965   14402 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 13:12:24.713555   14402 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 13:12:24.714161   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:12:24.714408   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:12:29.715973   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:12:29.716185   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:12:39.717801   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:12:39.718019   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:12:59.719201   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:12:59.719439   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:13:39.719420   14402 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:13:39.719584   14402 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:13:39.719596   14402 kubeadm.go:317] 
	I0108 13:13:39.719632   14402 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 13:13:39.719671   14402 kubeadm.go:317] 	timed out waiting for the condition
	I0108 13:13:39.719678   14402 kubeadm.go:317] 
	I0108 13:13:39.719703   14402 kubeadm.go:317] This error is likely caused by:
	I0108 13:13:39.719728   14402 kubeadm.go:317] 	- The kubelet is not running
	I0108 13:13:39.719808   14402 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 13:13:39.719812   14402 kubeadm.go:317] 
	I0108 13:13:39.719906   14402 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 13:13:39.719933   14402 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 13:13:39.719970   14402 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 13:13:39.719980   14402 kubeadm.go:317] 
	I0108 13:13:39.720063   14402 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 13:13:39.720130   14402 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 13:13:39.720195   14402 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 13:13:39.720233   14402 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 13:13:39.720292   14402 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 13:13:39.720324   14402 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0108 13:13:39.723177   14402 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 13:13:39.723291   14402 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0108 13:13:39.723380   14402 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 13:13:39.723439   14402 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 13:13:39.723507   14402 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 13:13:39.723522   14402 kubeadm.go:398] StartCluster complete in 3m54.717760021s
	I0108 13:13:39.723623   14402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:13:39.747038   14402 logs.go:274] 0 containers: []
	W0108 13:13:39.747053   14402 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:13:39.747132   14402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:13:39.771098   14402 logs.go:274] 0 containers: []
	W0108 13:13:39.771113   14402 logs.go:276] No container was found matching "etcd"
	I0108 13:13:39.771199   14402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:13:39.794219   14402 logs.go:274] 0 containers: []
	W0108 13:13:39.794233   14402 logs.go:276] No container was found matching "coredns"
	I0108 13:13:39.794314   14402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:13:39.816913   14402 logs.go:274] 0 containers: []
	W0108 13:13:39.816928   14402 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:13:39.817037   14402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:13:39.840924   14402 logs.go:274] 0 containers: []
	W0108 13:13:39.840941   14402 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:13:39.841024   14402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:13:39.863744   14402 logs.go:274] 0 containers: []
	W0108 13:13:39.863759   14402 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:13:39.863848   14402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:13:39.887810   14402 logs.go:274] 0 containers: []
	W0108 13:13:39.887822   14402 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:13:39.887901   14402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:13:39.910627   14402 logs.go:274] 0 containers: []
	W0108 13:13:39.910641   14402 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:13:39.910649   14402 logs.go:123] Gathering logs for kubelet ...
	I0108 13:13:39.910655   14402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:13:39.951009   14402 logs.go:123] Gathering logs for dmesg ...
	I0108 13:13:39.951024   14402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:13:39.968155   14402 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:13:39.968172   14402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:13:40.030925   14402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:13:40.030940   14402 logs.go:123] Gathering logs for Docker ...
	I0108 13:13:40.030947   14402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:13:40.047376   14402 logs.go:123] Gathering logs for container status ...
	I0108 13:13:40.047390   14402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:13:42.100132   14402 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052721319s)
	W0108 13:13:42.100252   14402 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 13:13:42.100268   14402 out.go:239] * 
	* 
	W0108 13:13:42.100418   14402 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 13:13:42.100434   14402 out.go:239] * 
	* 
	W0108 13:13:42.101079   14402 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 13:13:42.197021   14402 out.go:177] 
	W0108 13:13:42.272092   14402 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 13:13:42.272226   14402 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 13:13:42.272315   14402 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 13:13:42.315641   14402 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
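The kubeadm output above points at kubelet health as the root cause, and minikube's own suggestion is to retry with an explicit cgroup driver. A minimal manual troubleshooting sketch, using only commands already quoted in this log (the profile name, binary path, and --extra-config flag come from the output above; reaching the node through 'minikube ssh' is an assumption about how one would run the quoted commands, not something the test itself does):

	# check kubelet state inside the kubernetes-upgrade-130931 node (commands from the kubeadm hint above)
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-130931 -- sudo systemctl status kubelet
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-130931 -- sudo journalctl -xeu kubelet
	# list any control-plane containers that started and then exited
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-130931 -- "docker ps -a | grep kube | grep -v pause"
	# retry the failing start with the cgroup driver the error message suggests
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --extra-config=kubelet.cgroup-driver=systemd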
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-130931

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-130931: (1.823852398s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-130931 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-130931 status --format={{.Host}}: exit status 7 (142.951521ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (4m36.640184373s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-130931 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (410.233619ms)

-- stdout --
	* [kubernetes-upgrade-130931] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-130931
	    minikube start -p kubernetes-upgrade-130931 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1309312 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-130931 --kubernetes-version=v1.25.3
	    

** /stderr **
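Exit status 106 corresponds to the K8S_DOWNGRADE_UNSUPPORTED guard shown above: minikube will not move an existing v1.25.3 cluster back to v1.16.0 in place. When the older version is actually wanted, the first of the printed options (delete the profile and recreate it at the target version) is the usual path:

    minikube delete -p kubernetes-upgrade-130931
    minikube start -p kubernetes-upgrade-130931 --kubernetes-version=v1.16.0

The test instead follows the third option and simply restarts the existing cluster at v1.25.3, as the next commands show.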
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-130931 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (45.48874336s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2023-01-08 13:19:07.073779 -0800 PST m=+3144.038936630
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-130931
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-130931:

-- stdout --
	[
	    {
	        "Id": "926444117dea88fce7287398c59bd457ac4997086e7b69d5cbbb4da30dfdc809",
	        "Created": "2023-01-08T21:09:39.694013343Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 179480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:13:46.015177345Z",
	            "FinishedAt": "2023-01-08T21:13:43.061733485Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/926444117dea88fce7287398c59bd457ac4997086e7b69d5cbbb4da30dfdc809/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/926444117dea88fce7287398c59bd457ac4997086e7b69d5cbbb4da30dfdc809/hostname",
	        "HostsPath": "/var/lib/docker/containers/926444117dea88fce7287398c59bd457ac4997086e7b69d5cbbb4da30dfdc809/hosts",
	        "LogPath": "/var/lib/docker/containers/926444117dea88fce7287398c59bd457ac4997086e7b69d5cbbb4da30dfdc809/926444117dea88fce7287398c59bd457ac4997086e7b69d5cbbb4da30dfdc809-json.log",
	        "Name": "/kubernetes-upgrade-130931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-130931:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-130931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/adfa2c09b827b0f73829d027fb642879123a3f6ed3bd71e2ff9581ef6588d277-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/adfa2c09b827b0f73829d027fb642879123a3f6ed3bd71e2ff9581ef6588d277/merged",
	                "UpperDir": "/var/lib/docker/overlay2/adfa2c09b827b0f73829d027fb642879123a3f6ed3bd71e2ff9581ef6588d277/diff",
	                "WorkDir": "/var/lib/docker/overlay2/adfa2c09b827b0f73829d027fb642879123a3f6ed3bd71e2ff9581ef6588d277/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-130931",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-130931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-130931",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-130931",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-130931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a93c11bc4f6ca59d0d777ea699c2bb2984319956835df602dc98543b5a9a53f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52782"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52783"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52784"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52786"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7a93c11bc4f6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-130931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "926444117dea",
	                        "kubernetes-upgrade-130931"
	                    ],
	                    "NetworkID": "50c4ef39ee8722389197ba0dd4bb6d192d1faae218c8d9f5b4cf029ebadb3582",
	                    "EndpointID": "ccd7288b92be592c2c60dc929925c3d53e13e575cbf0d1762a316e35ba999689",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
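Most of the inspect dump above is defaults; the fields the rest of this log actually uses are the container state, the node IP on the kubernetes-upgrade-130931 network, and the host port mapped to 22/tcp for SSH. They can be pulled out directly with the same format templates the minikube start log below runs itself:

    docker container inspect kubernetes-upgrade-130931 --format={{.State.Status}}
    docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-130931
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-130931

Against the state captured above these resolve to "running", "192.168.76.2," and "52782" respectively.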
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-130931 -n kubernetes-upgrade-130931
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-130931 logs -n 25

=== CONT  TestKubernetesUpgrade
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-130931 logs -n 25: (2.857412114s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-130931   | kubernetes-upgrade-130931 | jenkins | v1.28.0 | 08 Jan 23 13:13 PST | 08 Jan 23 13:13 PST |
	| start   | -p NoKubernetes-131313         | NoKubernetes-131313       | jenkins | v1.28.0 | 08 Jan 23 13:13 PST | 08 Jan 23 13:13 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-130931   | kubernetes-upgrade-130931 | jenkins | v1.28.0 | 08 Jan 23 13:13 PST | 08 Jan 23 13:18 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-131313         | NoKubernetes-131313       | jenkins | v1.28.0 | 08 Jan 23 13:13 PST | 08 Jan 23 13:14 PST |
	| start   | -p NoKubernetes-131313         | NoKubernetes-131313       | jenkins | v1.28.0 | 08 Jan 23 13:14 PST | 08 Jan 23 13:14 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-131313 sudo    | NoKubernetes-131313       | jenkins | v1.28.0 | 08 Jan 23 13:14 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| profile | list                           | minikube                  | jenkins | v1.28.0 | 08 Jan 23 13:14 PST | 08 Jan 23 13:14 PST |
	| profile | list --output=json             | minikube                  | jenkins | v1.28.0 | 08 Jan 23 13:14 PST | 08 Jan 23 13:14 PST |
	| stop    | -p NoKubernetes-131313         | NoKubernetes-131313       | jenkins | v1.28.0 | 08 Jan 23 13:14 PST | 08 Jan 23 13:14 PST |
	| start   | -p NoKubernetes-131313         | NoKubernetes-131313       | jenkins | v1.28.0 | 08 Jan 23 13:14 PST | 08 Jan 23 13:14 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-131313 sudo    | NoKubernetes-131313       | jenkins | v1.28.0 | 08 Jan 23 13:14 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-131313         | NoKubernetes-131313       | jenkins | v1.28.0 | 08 Jan 23 13:14 PST | 08 Jan 23 13:14 PST |
	| start   | -p auto-130508 --memory=2048   | auto-130508               | jenkins | v1.28.0 | 08 Jan 23 13:14 PST | 08 Jan 23 13:15 PST |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p auto-130508 pgrep -a        | auto-130508               | jenkins | v1.28.0 | 08 Jan 23 13:15 PST | 08 Jan 23 13:15 PST |
	|         | kubelet                        |                           |         |         |                     |                     |
	| delete  | -p auto-130508                 | auto-130508               | jenkins | v1.28.0 | 08 Jan 23 13:15 PST | 08 Jan 23 13:15 PST |
	| start   | -p kindnet-130508              | kindnet-130508            | jenkins | v1.28.0 | 08 Jan 23 13:15 PST | 08 Jan 23 13:16 PST |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker  |                           |         |         |                     |                     |
	| ssh     | -p kindnet-130508 pgrep -a     | kindnet-130508            | jenkins | v1.28.0 | 08 Jan 23 13:16 PST | 08 Jan 23 13:16 PST |
	|         | kubelet                        |                           |         |         |                     |                     |
	| delete  | -p kindnet-130508              | kindnet-130508            | jenkins | v1.28.0 | 08 Jan 23 13:17 PST | 08 Jan 23 13:17 PST |
	| start   | -p enable-default-cni-130508   | enable-default-cni-130508 | jenkins | v1.28.0 | 08 Jan 23 13:17 PST | 08 Jan 23 13:17 PST |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --enable-default-cni=true      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-130508   | enable-default-cni-130508 | jenkins | v1.28.0 | 08 Jan 23 13:17 PST | 08 Jan 23 13:17 PST |
	|         | pgrep -a kubelet               |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-130508   | enable-default-cni-130508 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	| start   | -p false-130508 --memory=2048  | false-130508              | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=false  |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-130931   | kubernetes-upgrade-130931 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-130931   | kubernetes-upgrade-130931 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:19 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p false-130508 pgrep -a       | false-130508              | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
	|         | kubelet                        |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 13:18:21
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 13:18:21.642920   16717 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:18:21.643104   16717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:18:21.643110   16717 out.go:309] Setting ErrFile to fd 2...
	I0108 13:18:21.643119   16717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:18:21.643242   16717 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 13:18:21.643714   16717 out.go:303] Setting JSON to false
	I0108 13:18:21.663889   16717 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4674,"bootTime":1673208027,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 13:18:21.663991   16717 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:18:21.682336   16717 out.go:177] * [kubernetes-upgrade-130931] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:18:21.703125   16717 notify.go:220] Checking for updates...
	I0108 13:18:21.713413   16717 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:18:21.758294   16717 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:18:21.775501   16717 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:18:21.808213   16717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:18:21.828532   16717 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 13:18:21.851456   16717 config.go:180] Loaded profile config "kubernetes-upgrade-130931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:18:21.851928   16717 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:18:21.918674   16717 docker.go:137] docker version: linux-20.10.21
	I0108 13:18:21.918843   16717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:18:22.077883   16717 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-08 21:18:21.974965655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:18:22.130969   16717 out.go:177] * Using the docker driver based on existing profile
	I0108 13:18:22.152249   16717 start.go:294] selected driver: docker
	I0108 13:18:22.152267   16717 start.go:838] validating driver "docker" against &{Name:kubernetes-upgrade-130931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-130931 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:18:22.152356   16717 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:18:22.156117   16717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:18:22.313145   16717 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:18:22.211739573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:18:22.313292   16717 cni.go:95] Creating CNI manager for ""
	I0108 13:18:22.313307   16717 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:18:22.313320   16717 start_flags.go:317] config:
	{Name:kubernetes-upgrade-130931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-130931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmn
et/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:18:22.336745   16717 out.go:177] * Starting control plane node kubernetes-upgrade-130931 in cluster kubernetes-upgrade-130931
	I0108 13:18:22.358037   16717 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 13:18:22.380961   16717 out.go:177] * Pulling base image ...
	I0108 13:18:22.456166   16717 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:18:22.456188   16717 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 13:18:22.456296   16717 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 13:18:22.456337   16717 cache.go:57] Caching tarball of preloaded images
	I0108 13:18:22.456689   16717 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:18:22.456718   16717 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 13:18:22.457967   16717 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/config.json ...
	I0108 13:18:22.520237   16717 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 13:18:22.520253   16717 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 13:18:22.520269   16717 cache.go:193] Successfully downloaded all kic artifacts
	I0108 13:18:22.520420   16717 start.go:364] acquiring machines lock for kubernetes-upgrade-130931: {Name:mk4f787016e0e08c82b479a38e6950a2596b9276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:18:22.520516   16717 start.go:368] acquired machines lock for "kubernetes-upgrade-130931" in 72.631µs
	I0108 13:18:22.520543   16717 start.go:96] Skipping create...Using existing machine configuration
	I0108 13:18:22.520551   16717 fix.go:55] fixHost starting: 
	I0108 13:18:22.520827   16717 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-130931 --format={{.State.Status}}
	I0108 13:18:22.584483   16717 fix.go:103] recreateIfNeeded on kubernetes-upgrade-130931: state=Running err=<nil>
	W0108 13:18:22.584510   16717 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 13:18:22.606884   16717 out.go:177] * Updating the running docker "kubernetes-upgrade-130931" container ...
	I0108 13:18:22.055945   16646 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-130508:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (7.308119377s)
	I0108 13:18:22.055967   16646 kic.go:188] duration metric: took 7.308402 seconds to extract preloaded images to volume
	I0108 13:18:22.056098   16646 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 13:18:22.257520   16646 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-130508 --name false-130508 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-130508 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-130508 --network false-130508 --ip 192.168.67.2 --volume false-130508:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0108 13:18:22.647729   16717 machine.go:88] provisioning docker machine ...
	I0108 13:18:22.647771   16717 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-130931"
	I0108 13:18:22.647889   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:22.710047   16717 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:22.710309   16717 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52782 <nil> <nil>}
	I0108 13:18:22.710322   16717 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-130931 && echo "kubernetes-upgrade-130931" | sudo tee /etc/hostname
	I0108 13:18:22.849931   16717 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-130931
	
	I0108 13:18:22.850038   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:22.918903   16717 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:22.919078   16717 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52782 <nil> <nil>}
	I0108 13:18:22.919094   16717 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-130931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-130931/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-130931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 13:18:23.044561   16717 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:18:23.044593   16717 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 13:18:23.044631   16717 ubuntu.go:177] setting up certificates
	I0108 13:18:23.044648   16717 provision.go:83] configureAuth start
	I0108 13:18:23.044759   16717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-130931
	I0108 13:18:23.115729   16717 provision.go:138] copyHostCerts
	I0108 13:18:23.115863   16717 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 13:18:23.115877   16717 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 13:18:23.116037   16717 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 13:18:23.116268   16717 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 13:18:23.116275   16717 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 13:18:23.116355   16717 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 13:18:23.116547   16717 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 13:18:23.116558   16717 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 13:18:23.116669   16717 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 13:18:23.116826   16717 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-130931 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-130931]
	I0108 13:18:23.402052   16717 provision.go:172] copyRemoteCerts
	I0108 13:18:23.402173   16717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 13:18:23.402246   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:23.468473   16717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52782 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:18:23.556671   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 13:18:23.576175   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0108 13:18:23.597421   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 13:18:23.614837   16717 provision.go:86] duration metric: configureAuth took 570.174218ms
	I0108 13:18:23.614866   16717 ubuntu.go:193] setting minikube options for container-runtime
	I0108 13:18:23.615128   16717 config.go:180] Loaded profile config "kubernetes-upgrade-130931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:18:23.615212   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:23.677936   16717 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:23.678105   16717 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52782 <nil> <nil>}
	I0108 13:18:23.678114   16717 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 13:18:23.796533   16717 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 13:18:23.796548   16717 ubuntu.go:71] root file system type: overlay
	I0108 13:18:23.796701   16717 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 13:18:23.796849   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:23.859456   16717 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:23.859626   16717 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52782 <nil> <nil>}
	I0108 13:18:23.859676   16717 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 13:18:23.986351   16717 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 13:18:23.986482   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:24.052415   16717 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:24.052603   16717 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52782 <nil> <nil>}
	I0108 13:18:24.052620   16717 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 13:18:24.174660   16717 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:18:24.174677   16717 machine.go:91] provisioned docker machine in 1.52692642s
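Note: the provisioning step above is idempotent — the rendered unit is written to docker.service.new, diffed against the installed docker.service, and only moved into place (with a daemon-reload, enable and restart) when the two differ. A minimal shell sketch of the same pattern, assuming docker.service.new has already been written as in the tee step above (the log's -f force flags are omitted):

    UNIT=/lib/systemd/system/docker.service
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
        sudo mv "$UNIT.new" "$UNIT"          # install the new unit only when it changed
        sudo systemctl daemon-reload
        sudo systemctl enable docker
        sudo systemctl restart docker
    fi

Here the diff printed no output, so the replace-and-restart branch after || did not run for this host.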
	I0108 13:18:24.174687   16717 start.go:300] post-start starting for "kubernetes-upgrade-130931" (driver="docker")
	I0108 13:18:24.174692   16717 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 13:18:24.174826   16717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 13:18:24.174889   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:24.236588   16717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52782 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:18:24.323857   16717 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 13:18:24.329028   16717 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 13:18:24.329064   16717 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 13:18:24.329073   16717 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 13:18:24.329080   16717 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 13:18:24.329089   16717 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 13:18:24.329199   16717 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 13:18:24.329465   16717 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 13:18:24.329714   16717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 13:18:24.341335   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:18:24.361935   16717 start.go:303] post-start completed in 187.238128ms
	I0108 13:18:24.362023   16717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 13:18:24.362089   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:24.425386   16717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52782 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:18:24.510016   16717 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 13:18:24.515280   16717 fix.go:57] fixHost completed within 1.994718541s
	I0108 13:18:24.515294   16717 start.go:83] releasing machines lock for "kubernetes-upgrade-130931", held for 1.994762363s
	I0108 13:18:24.515387   16717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-130931
	I0108 13:18:24.582070   16717 ssh_runner.go:195] Run: cat /version.json
	I0108 13:18:24.582079   16717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 13:18:24.582152   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:24.582165   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:24.647420   16717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52782 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:18:24.647636   16717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52782 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:18:24.792536   16717 ssh_runner.go:195] Run: systemctl --version
	I0108 13:18:24.797660   16717 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 13:18:24.808341   16717 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 13:18:24.808421   16717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 13:18:24.818948   16717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 13:18:24.832710   16717 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 13:18:24.920764   16717 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 13:18:25.008655   16717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:18:25.100451   16717 ssh_runner.go:195] Run: sudo systemctl restart docker
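Note: the /etc/crictl.yaml written above points both the runtime and image endpoints at the cri-dockerd socket, so the later crictl calls talk to Docker through cri-dockerd; the unmask/enable/daemon-reload/restart sequence then cycles Docker under the updated unit. Run by hand, the same sequence is roughly:

    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\nimage-endpoint: unix:///var/run/cri-dockerd.sock\n' \
        | sudo tee /etc/crictl.yaml
    sudo systemctl unmask docker.service
    sudo systemctl enable docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart docker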
	I0108 13:18:22.786467   16646 cli_runner.go:164] Run: docker container inspect false-130508 --format={{.State.Running}}
	I0108 13:18:22.848402   16646 cli_runner.go:164] Run: docker container inspect false-130508 --format={{.State.Status}}
	I0108 13:18:22.917995   16646 cli_runner.go:164] Run: docker exec false-130508 stat /var/lib/dpkg/alternatives/iptables
	I0108 13:18:23.037949   16646 oci.go:144] the created container "false-130508" has a running status.
	I0108 13:18:23.037977   16646 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/false-130508/id_rsa...
	I0108 13:18:23.153727   16646 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/false-130508/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 13:18:23.274521   16646 cli_runner.go:164] Run: docker container inspect false-130508 --format={{.State.Status}}
	I0108 13:18:23.345555   16646 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 13:18:23.345580   16646 kic_runner.go:114] Args: [docker exec --privileged false-130508 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 13:18:23.463597   16646 cli_runner.go:164] Run: docker container inspect false-130508 --format={{.State.Status}}
	I0108 13:18:23.524690   16646 machine.go:88] provisioning docker machine ...
	I0108 13:18:23.524739   16646 ubuntu.go:169] provisioning hostname "false-130508"
	I0108 13:18:23.524862   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:23.587611   16646 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:23.587800   16646 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53276 <nil> <nil>}
	I0108 13:18:23.587816   16646 main.go:134] libmachine: About to run SSH command:
	sudo hostname false-130508 && echo "false-130508" | sudo tee /etc/hostname
	I0108 13:18:23.717852   16646 main.go:134] libmachine: SSH cmd err, output: <nil>: false-130508
	
	I0108 13:18:23.717969   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:23.781417   16646 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:23.781576   16646 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53276 <nil> <nil>}
	I0108 13:18:23.781593   16646 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-130508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-130508/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-130508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 13:18:23.901896   16646 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:18:23.901917   16646 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 13:18:23.901944   16646 ubuntu.go:177] setting up certificates
	I0108 13:18:23.901957   16646 provision.go:83] configureAuth start
	I0108 13:18:23.902062   16646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-130508
	I0108 13:18:23.963460   16646 provision.go:138] copyHostCerts
	I0108 13:18:23.963563   16646 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 13:18:23.963572   16646 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 13:18:23.963665   16646 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 13:18:23.963863   16646 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 13:18:23.963870   16646 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 13:18:23.963929   16646 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 13:18:23.964099   16646 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 13:18:23.964105   16646 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 13:18:23.964163   16646 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 13:18:23.964291   16646 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.false-130508 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube false-130508]
	I0108 13:18:24.106747   16646 provision.go:172] copyRemoteCerts
	I0108 13:18:24.106818   16646 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 13:18:24.106888   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:24.168264   16646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53276 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/false-130508/id_rsa Username:docker}
	I0108 13:18:24.254620   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 13:18:24.272910   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0108 13:18:24.290670   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 13:18:24.308488   16646 provision.go:86] duration metric: configureAuth took 406.508277ms
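Note: configureAuth generates a server certificate signed by the profile's CA with the SANs listed above, then copies ca.pem, server.pem and server-key.pem into /etc/docker, which is what lets dockerd run with --tlsverify on tcp://0.0.0.0:2376. minikube does this in Go; a rough openssl equivalent (key size and validity period are illustrative, not taken from the log) would be:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.false-130508" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
        -out server.pem \
        -extfile <(printf 'subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:false-130508')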
	I0108 13:18:24.308503   16646 ubuntu.go:193] setting minikube options for container-runtime
	I0108 13:18:24.308659   16646 config.go:180] Loaded profile config "false-130508": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:18:24.308744   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:24.377521   16646 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:24.377678   16646 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53276 <nil> <nil>}
	I0108 13:18:24.377694   16646 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 13:18:24.497111   16646 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 13:18:24.497126   16646 ubuntu.go:71] root file system type: overlay
	I0108 13:18:24.497258   16646 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 13:18:24.497352   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:24.564080   16646 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:24.564236   16646 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53276 <nil> <nil>}
	I0108 13:18:24.564291   16646 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 13:18:24.695117   16646 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 13:18:24.695243   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:24.757580   16646 main.go:134] libmachine: Using SSH client type: native
	I0108 13:18:24.757730   16646 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53276 <nil> <nil>}
	I0108 13:18:24.757744   16646 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 13:18:25.387121   16646 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:18:24.692589533 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0108 13:18:25.387173   16646 machine.go:91] provisioned docker machine in 1.862432758s
	I0108 13:18:25.387180   16646 client.go:171] LocalClient.Create took 11.632392703s
	I0108 13:18:25.387199   16646 start.go:167] duration metric: libmachine.API.Create for "false-130508" took 11.63244652s
	I0108 13:18:25.387209   16646 start.go:300] post-start starting for "false-130508" (driver="docker")
	I0108 13:18:25.387214   16646 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 13:18:25.387309   16646 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 13:18:25.387396   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:25.449053   16646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53276 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/false-130508/id_rsa Username:docker}
	I0108 13:18:25.537439   16646 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 13:18:25.541387   16646 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 13:18:25.541404   16646 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 13:18:25.541415   16646 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 13:18:25.541421   16646 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 13:18:25.541433   16646 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 13:18:25.541529   16646 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 13:18:25.541714   16646 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 13:18:25.541915   16646 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 13:18:25.549764   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:18:25.567833   16646 start.go:303] post-start completed in 180.615532ms
	I0108 13:18:25.568378   16646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-130508
	I0108 13:18:25.627281   16646 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/config.json ...
	I0108 13:18:25.627720   16646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 13:18:25.627789   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:25.689017   16646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53276 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/false-130508/id_rsa Username:docker}
	I0108 13:18:25.773430   16646 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 13:18:25.778045   16646 start.go:128] duration metric: createHost completed in 12.065994034s
	I0108 13:18:25.778068   16646 start.go:83] releasing machines lock for "false-130508", held for 12.066120916s
	I0108 13:18:25.778184   16646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-130508
	I0108 13:18:25.838712   16646 ssh_runner.go:195] Run: cat /version.json
	I0108 13:18:25.838721   16646 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 13:18:25.838800   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:25.838806   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:25.907961   16646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53276 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/false-130508/id_rsa Username:docker}
	I0108 13:18:25.908162   16646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53276 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/false-130508/id_rsa Username:docker}
	I0108 13:18:26.052210   16646 ssh_runner.go:195] Run: systemctl --version
	I0108 13:18:26.058196   16646 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 13:18:26.069486   16646 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 13:18:26.069648   16646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 13:18:26.080779   16646 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 13:18:26.095874   16646 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 13:18:26.169371   16646 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 13:18:26.238245   16646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:18:26.308983   16646 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 13:18:26.549458   16646 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 13:18:26.631469   16646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:18:26.701437   16646 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 13:18:26.713609   16646 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 13:18:26.713794   16646 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 13:18:26.718110   16646 start.go:472] Will wait 60s for crictl version
	I0108 13:18:26.718160   16646 ssh_runner.go:195] Run: sudo crictl version
	I0108 13:18:26.821800   16646 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0108 13:18:26.821893   16646 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:18:26.851633   16646 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:18:26.922347   16646 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 13:18:26.922503   16646 cli_runner.go:164] Run: docker exec -t false-130508 dig +short host.docker.internal
	I0108 13:18:27.039163   16646 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 13:18:27.039307   16646 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 13:18:27.044374   16646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
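Note: the one-liner above refreshes the host.minikube.internal mapping by filtering any stale entry out of /etc/hosts, appending the freshly dug host IP, and copying the temp file back over /etc/hosts. The same commands, broken out for readability:

    {
        grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any previous mapping
        printf '192.168.65.2\thost.minikube.internal\n'    # append the current host IP
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts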
	I0108 13:18:27.058040   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:27.121731   16646 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:18:27.121820   16646 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:18:27.147409   16646 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 13:18:27.147429   16646 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:18:27.147562   16646 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:18:27.175305   16646 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 13:18:27.175324   16646 cache_images.go:84] Images are preloaded, skipping loading
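Note: whether the preload tarball needs extracting is decided by listing what the daemon already has (docker images --format above) and comparing it against the expected image set for v1.25.3. A quick manual check along the same lines (expected list abbreviated here):

    have=$(docker images --format '{{.Repository}}:{{.Tag}}')
    for img in registry.k8s.io/kube-apiserver:v1.25.3 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/pause:3.8; do
        echo "$have" | grep -qx "$img" || echo "missing: $img"
    done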
	I0108 13:18:27.175432   16646 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 13:18:27.252433   16646 cni.go:95] Creating CNI manager for "false"
	I0108 13:18:27.252458   16646 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 13:18:27.252477   16646 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-130508 NodeName:false-130508 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 13:18:27.252591   16646 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "false-130508"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 13:18:27.252678   16646 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=false-130508 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:false-130508 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
	I0108 13:18:27.252754   16646 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 13:18:27.261447   16646 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 13:18:27.261535   16646 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 13:18:27.270104   16646 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (474 bytes)
	I0108 13:18:27.285516   16646 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 13:18:27.301435   16646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2034 bytes)
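Note: the kubelet flags shown above land in the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (474 bytes), next to a minimal base unit in /lib/systemd/system/kubelet.service (352 bytes), while the rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (2034 bytes). The empty ExecStart= line clears the base unit's command before the cri-dockerd-specific one is set — the same override pattern used for docker.service earlier. The merged result on the node can be inspected with standard systemd tooling (illustrative commands, not part of the test run):

    systemctl cat kubelet.service        # base unit plus the 10-kubeadm.conf drop-in
    systemd-delta --type=extended        # lists all units extended by drop-ins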
	I0108 13:18:27.316187   16646 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 13:18:27.320903   16646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:18:27.331936   16646 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508 for IP: 192.168.67.2
	I0108 13:18:27.332081   16646 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 13:18:27.332158   16646 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 13:18:27.332209   16646 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.key
	I0108 13:18:27.332232   16646 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt with IP's: []
	I0108 13:18:27.419772   16646 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt ...
	I0108 13:18:27.419786   16646 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: {Name:mk934c8908bf50cab16d7c971cbe52f27bba93fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:18:27.420079   16646 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.key ...
	I0108 13:18:27.420087   16646 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.key: {Name:mkb140fae38d78b729949d039f2c052adcc02231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:18:27.420295   16646 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.key.c7fa3a9e
	I0108 13:18:27.420315   16646 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 13:18:27.497684   16646 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.crt.c7fa3a9e ...
	I0108 13:18:27.497700   16646 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.crt.c7fa3a9e: {Name:mk73818e46f582c1a7e562064bccc235e9a47101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:18:27.498027   16646 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.key.c7fa3a9e ...
	I0108 13:18:27.498037   16646 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.key.c7fa3a9e: {Name:mka28332876a153882dd72f2b24ba1513feadb08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:18:27.498257   16646 certs.go:320] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.crt
	I0108 13:18:27.498455   16646 certs.go:324] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.key
	I0108 13:18:27.498647   16646 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/proxy-client.key
	I0108 13:18:27.498668   16646 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/proxy-client.crt with IP's: []
	I0108 13:18:27.713021   16646 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/proxy-client.crt ...
	I0108 13:18:27.732016   16646 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/proxy-client.crt: {Name:mk8780b1787f4d38354f048720cf9c4d5f86c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:18:27.732405   16646 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/proxy-client.key ...
	I0108 13:18:27.732416   16646 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/proxy-client.key: {Name:mk9c870e85a90b23a03baa4cb7802151cd5ccabd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:18:27.732952   16646 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 13:18:27.733014   16646 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 13:18:27.733029   16646 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 13:18:27.733074   16646 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 13:18:27.733112   16646 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 13:18:27.733151   16646 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 13:18:27.733236   16646 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:18:27.733825   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 13:18:27.753644   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 13:18:27.772653   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 13:18:27.792279   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 13:18:27.812336   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 13:18:27.832056   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 13:18:27.851034   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 13:18:27.869515   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 13:18:27.888916   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 13:18:27.909665   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 13:18:27.929716   16646 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 13:18:27.947738   16646 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 13:18:27.961318   16646 ssh_runner.go:195] Run: openssl version
	I0108 13:18:27.967007   16646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 13:18:27.975561   16646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:18:27.980154   16646 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:18:27.980220   16646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:18:27.985951   16646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 13:18:27.994427   16646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 13:18:28.003443   16646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 13:18:28.007775   16646 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 13:18:28.007824   16646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 13:18:28.013619   16646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 13:18:28.021815   16646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 13:18:28.030585   16646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 13:18:28.034812   16646 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 13:18:28.034878   16646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 13:18:28.040521   16646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
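Note: each CA copied onto the node is exposed to OpenSSL by symlinking it under /etc/ssl/certs as <subject-hash>.0, which is exactly what the hash/ln pairs above compute (b5213941 for minikubeCA.pem, 51391683 for 4083.pem, 3ec20f2e for 40832.pem). A generic form of that step:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")      # prints the subject hash, e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"     # OpenSSL looks CAs up by this hash name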
	I0108 13:18:28.048769   16646 kubeadm.go:396] StartCluster: {Name:false-130508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:false-130508 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:18:28.048890   16646 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:18:28.072987   16646 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 13:18:28.081337   16646 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:18:28.089089   16646 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 13:18:28.089148   16646 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:18:28.096972   16646 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 13:18:28.096999   16646 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 13:18:28.143614   16646 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0108 13:18:28.143671   16646 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 13:18:28.246116   16646 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 13:18:28.246209   16646 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 13:18:28.246307   16646 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 13:18:28.376497   16646 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 13:18:28.397027   16646 out.go:204]   - Generating certificates and keys ...
	I0108 13:18:28.397103   16646 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 13:18:28.397164   16646 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 13:18:28.608199   16646 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 13:18:28.777657   16646 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0108 13:18:28.926024   16646 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0108 13:18:28.991356   16646 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0108 13:18:29.093716   16646 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0108 13:18:29.093840   16646 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [false-130508 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0108 13:18:29.187694   16646 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0108 13:18:29.187832   16646 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [false-130508 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0108 13:18:29.320147   16646 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 13:18:29.712203   16646 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 13:18:29.911662   16646 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0108 13:18:29.911722   16646 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 13:18:30.100537   16646 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 13:18:30.201039   16646 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 13:18:30.253663   16646 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 13:18:30.581422   16646 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 13:18:30.592351   16646 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 13:18:30.593211   16646 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 13:18:30.593259   16646 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0108 13:18:30.666241   16646 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 13:18:30.690678   16646 out.go:204]   - Booting up control plane ...
	I0108 13:18:30.690791   16646 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 13:18:30.690873   16646 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 13:18:30.690953   16646 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 13:18:30.691035   16646 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 13:18:30.691159   16646 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 13:18:35.859297   16717 ssh_runner.go:235] Completed: sudo systemctl restart docker: (10.758779041s)
	I0108 13:18:35.859391   16717 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 13:18:35.937963   16717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:18:36.016033   16717 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 13:18:36.032136   16717 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 13:18:36.032231   16717 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 13:18:36.038729   16717 start.go:472] Will wait 60s for crictl version
	I0108 13:18:36.038809   16717 ssh_runner.go:195] Run: sudo crictl version
	I0108 13:18:36.119363   16717 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
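
The "Will wait 60s for socket path /var/run/cri-dockerd.sock" step above is essentially a stat-in-a-loop with a deadline. A minimal Go sketch of that pattern, illustrative only and not part of the captured log (the path and timeout come from the log; the real check runs stat over SSH inside the node):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a file or socket to appear until the deadline expires,
// mirroring the "Will wait 60s for socket path" step logged above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // path exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is present")
}
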
	I0108 13:18:36.119453   16717 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:18:36.152665   16717 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:18:36.242399   16717 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 13:18:36.242615   16717 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-130931 dig +short host.docker.internal
	I0108 13:18:36.379536   16717 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 13:18:36.379664   16717 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 13:18:36.416087   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:36.490881   16717 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:18:36.490979   16717 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:18:36.541842   16717 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 13:18:36.541864   16717 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:18:36.541980   16717 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:18:36.570475   16717 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 13:18:36.570495   16717 cache_images.go:84] Images are preloaded, skipping loading
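
The two identical `docker images --format {{.Repository}}:{{.Tag}}` listings above are what lets the preload check conclude "Images are preloaded, skipping loading": the local image list is compared against the images required for the target Kubernetes version. A minimal Go sketch of that comparison, illustrative only and not part of the log (the two image names in main are taken from the listing above; everything else is assumed):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// missingImages lists local images the same way the log above does and
// reports which of the required images are not present yet.
func missingImages(required []string) ([]string, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		have[strings.TrimSpace(sc.Text())] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing, nil
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
	}
	missing, err := missingImages(required)
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	if len(missing) == 0 {
		fmt.Println("images are preloaded, skipping loading")
	} else {
		fmt.Println("need to load:", missing)
	}
}
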
	I0108 13:18:36.570594   16717 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 13:18:36.724826   16717 cni.go:95] Creating CNI manager for ""
	I0108 13:18:36.724844   16717 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:18:36.724860   16717 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 13:18:36.724898   16717 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-130931 NodeName:kubernetes-upgrade-130931 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 13:18:36.725031   16717 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-130931"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 13:18:36.725134   16717 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-130931 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-130931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 13:18:36.725214   16717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 13:18:36.736012   16717 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 13:18:36.736094   16717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 13:18:36.746341   16717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
	I0108 13:18:36.771074   16717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 13:18:36.787516   16717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I0108 13:18:36.804723   16717 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 13:18:36.809875   16717 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931 for IP: 192.168.76.2
	I0108 13:18:36.810054   16717 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 13:18:36.810134   16717 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 13:18:36.810274   16717 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.key
	I0108 13:18:36.810395   16717 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.key.31bdca25
	I0108 13:18:36.810515   16717 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.key
	I0108 13:18:36.810850   16717 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 13:18:36.810901   16717 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 13:18:36.810915   16717 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 13:18:36.810969   16717 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 13:18:36.811013   16717 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 13:18:36.811059   16717 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 13:18:36.811160   16717 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:18:36.811948   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 13:18:36.835511   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 13:18:36.859951   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 13:18:36.888084   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 13:18:36.908459   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 13:18:36.933076   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 13:18:36.967293   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 13:18:36.997378   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 13:18:37.019406   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 13:18:37.058915   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 13:18:37.084190   16717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 13:18:37.129841   16717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 13:18:37.151302   16717 ssh_runner.go:195] Run: openssl version
	I0108 13:18:37.158287   16717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 13:18:37.172421   16717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 13:18:37.180405   16717 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 13:18:37.180493   16717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 13:18:37.190058   16717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 13:18:37.198738   16717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 13:18:37.207906   16717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 13:18:37.212210   16717 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 13:18:37.212265   16717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 13:18:37.219833   16717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 13:18:37.229547   16717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 13:18:37.244166   16717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:18:37.251698   16717 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:18:37.251819   16717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:18:37.258878   16717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
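
The openssl/ln sequence above follows the standard OpenSSL hashed-directory layout: each CA certificate is symlinked into /etc/ssl/certs under its subject hash plus a ".0" suffix so TLS libraries can look it up by hash. A minimal Go sketch of the same two steps, illustrative only and not part of the log (paths are taken from the log; error handling is minimal):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a certificate and
// symlinks the certificate into certsDir as <hash>.0, as the log does above.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Remove any stale link first, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
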
	I0108 13:18:37.268722   16717 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-130931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-130931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:18:37.268884   16717 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:18:37.297769   16717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 13:18:37.307320   16717 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 13:18:37.307337   16717 kubeadm.go:627] restartCluster start
	I0108 13:18:37.307403   16717 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 13:18:37.315591   16717 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:18:37.315691   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:18:37.384999   16717 kubeconfig.go:92] found "kubernetes-upgrade-130931" server: "https://127.0.0.1:52786"
	I0108 13:18:37.385582   16717 kapi.go:59] client config for kubernetes-upgrade-130931: &rest.Config{Host:"https://127.0.0.1:52786", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:18:37.386130   16717 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 13:18:37.394365   16717 api_server.go:165] Checking apiserver status ...
	I0108 13:18:37.394425   16717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:18:37.404909   16717 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12670/cgroup
	W0108 13:18:37.413354   16717 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12670/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:18:37.413419   16717 ssh_runner.go:195] Run: ls
	I0108 13:18:37.421155   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:18:41.683842   16646 kubeadm.go:317] [apiclient] All control plane components are healthy after 11.010110 seconds
	I0108 13:18:41.683986   16646 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 13:18:41.693584   16646 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 13:18:42.207104   16646 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 13:18:42.207256   16646 kubeadm.go:317] [mark-control-plane] Marking the node false-130508 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 13:18:42.714259   16646 kubeadm.go:317] [bootstrap-token] Using token: u3y9mb.sjcein6sewrz9n8g
	I0108 13:18:42.421657   16717 api_server.go:268] stopped: https://127.0.0.1:52786/healthz: Get "https://127.0.0.1:52786/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 13:18:42.421716   16717 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I0108 13:18:42.686069   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:18:42.783373   16646 out.go:204]   - Configuring RBAC rules ...
	I0108 13:18:42.783556   16646 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 13:18:42.789421   16646 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 13:18:42.793460   16646 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 13:18:42.795448   16646 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 13:18:42.797484   16646 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 13:18:42.799124   16646 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 13:18:42.805577   16646 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 13:18:42.960204   16646 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0108 13:18:43.221590   16646 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0108 13:18:43.222364   16646 kubeadm.go:317] 
	I0108 13:18:43.222448   16646 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0108 13:18:43.222459   16646 kubeadm.go:317] 
	I0108 13:18:43.222552   16646 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0108 13:18:43.222560   16646 kubeadm.go:317] 
	I0108 13:18:43.222585   16646 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0108 13:18:43.223263   16646 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 13:18:43.223327   16646 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 13:18:43.223337   16646 kubeadm.go:317] 
	I0108 13:18:43.223383   16646 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0108 13:18:43.223411   16646 kubeadm.go:317] 
	I0108 13:18:43.223528   16646 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 13:18:43.223555   16646 kubeadm.go:317] 
	I0108 13:18:43.223642   16646 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0108 13:18:43.223772   16646 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 13:18:43.223853   16646 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 13:18:43.223866   16646 kubeadm.go:317] 
	I0108 13:18:43.223961   16646 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 13:18:43.224083   16646 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0108 13:18:43.224096   16646 kubeadm.go:317] 
	I0108 13:18:43.224169   16646 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token u3y9mb.sjcein6sewrz9n8g \
	I0108 13:18:43.224263   16646 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f \
	I0108 13:18:43.224308   16646 kubeadm.go:317] 	--control-plane 
	I0108 13:18:43.224324   16646 kubeadm.go:317] 
	I0108 13:18:43.224448   16646 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0108 13:18:43.224459   16646 kubeadm.go:317] 
	I0108 13:18:43.224544   16646 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token u3y9mb.sjcein6sewrz9n8g \
	I0108 13:18:43.224654   16646 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:6a958d331c801cf164f5d649887955d67eefc766e2918f2676e098c5bfced57f 
	I0108 13:18:43.227291   16646 kubeadm.go:317] W0108 21:18:28.136198    1044 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 13:18:43.227445   16646 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0108 13:18:43.227512   16646 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0108 13:18:43.227634   16646 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 13:18:43.227646   16646 cni.go:95] Creating CNI manager for "false"
	I0108 13:18:43.227661   16646 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 13:18:43.227765   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=false-130508 minikube.k8s.io/updated_at=2023_01_08T13_18_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:43.227774   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:43.237486   16646 ops.go:34] apiserver oom_adj: -16
	I0108 13:18:43.425396   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:43.989441   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:44.489307   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:44.989501   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:45.489359   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:45.989981   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:46.490073   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:46.990427   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:47.490041   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:47.688022   16717 api_server.go:268] stopped: https://127.0.0.1:52786/healthz: Get "https://127.0.0.1:52786/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 13:18:47.688070   16717 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I0108 13:18:48.070080   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:18:47.989211   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:48.489594   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:48.989760   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:49.489775   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:49.989100   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:50.489267   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:50.989248   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:51.490381   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:51.989130   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:52.489489   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:52.989072   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:53.489375   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:53.989321   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:54.489112   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:54.989881   16646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 13:18:55.065578   16646 kubeadm.go:1067] duration metric: took 11.8378378s to wait for elevateKubeSystemPrivileges.
	I0108 13:18:55.065596   16646 kubeadm.go:398] StartCluster complete in 27.016716156s
	I0108 13:18:55.065619   16646 settings.go:142] acquiring lock: {Name:mkc40aeb9f069e96cc5c51255984662f0292a058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:18:55.065709   16646 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:18:55.067360   16646 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:18:55.584573   16646 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "false-130508" rescaled to 1
	I0108 13:18:55.584611   16646 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:18:55.584626   16646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 13:18:55.584662   16646 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0108 13:18:55.584809   16646 config.go:180] Loaded profile config "false-130508": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:18:55.624452   16646 out.go:177] * Verifying Kubernetes components...
	I0108 13:18:55.624564   16646 addons.go:65] Setting storage-provisioner=true in profile "false-130508"
	I0108 13:18:55.624566   16646 addons.go:65] Setting default-storageclass=true in profile "false-130508"
	I0108 13:18:55.697708   16646 addons.go:227] Setting addon storage-provisioner=true in "false-130508"
	I0108 13:18:55.697723   16646 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-130508"
	W0108 13:18:55.697729   16646 addons.go:236] addon storage-provisioner should already be in state true
	I0108 13:18:55.697828   16646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:18:55.697838   16646 host.go:66] Checking if "false-130508" exists ...
	I0108 13:18:55.698462   16646 cli_runner.go:164] Run: docker container inspect false-130508 --format={{.State.Status}}
	I0108 13:18:55.698586   16646 cli_runner.go:164] Run: docker container inspect false-130508 --format={{.State.Status}}
	I0108 13:18:55.715035   16646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 13:18:55.721995   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:55.809577   16646 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 13:18:53.070820   16717 api_server.go:268] stopped: https://127.0.0.1:52786/healthz: Get "https://127.0.0.1:52786/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0108 13:18:53.271188   16717 api_server.go:165] Checking apiserver status ...
	I0108 13:18:53.271308   16717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:18:53.282606   16717 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12670/cgroup
	W0108 13:18:53.290794   16717 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12670/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:18:53.290851   16717 ssh_runner.go:195] Run: ls
	I0108 13:18:53.294919   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:18:55.640669   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:18:55.640698   16717 retry.go:31] will retry after 242.214273ms: https://127.0.0.1:52786/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
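
The repeated healthz dumps above come from polling https://127.0.0.1:52786/healthz until the apiserver stops returning 500 (the "[-] ... failed: reason withheld" lines are post-start hooks that have not completed yet). A minimal Go sketch of such a poll loop, illustrative only and not part of the log (the URL is taken from the log; the fixed sleep stands in for the randomized retry delays shown above, and TLS verification is skipped because the endpoint is a local port-forward):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes, printing the body on failure much like the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(300 * time.Millisecond) // stand-in for the randomized retry delay
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:52786/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
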
	I0108 13:18:55.882988   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:18:55.891551   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:18:55.891575   16717 retry.go:31] will retry after 300.724609ms: https://127.0.0.1:52786/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:18:56.192929   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:18:56.198606   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:18:56.198625   16717 retry.go:31] will retry after 427.113882ms: https://127.0.0.1:52786/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:18:56.625842   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:18:55.832674   16646 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:18:55.832713   16646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 13:18:55.832932   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:55.837712   16646 addons.go:227] Setting addon default-storageclass=true in "false-130508"
	W0108 13:18:55.837737   16646 addons.go:236] addon default-storageclass should already be in state true
	I0108 13:18:55.837764   16646 host.go:66] Checking if "false-130508" exists ...
	I0108 13:18:55.838445   16646 cli_runner.go:164] Run: docker container inspect false-130508 --format={{.State.Status}}
	I0108 13:18:55.846589   16646 node_ready.go:35] waiting up to 5m0s for node "false-130508" to be "Ready" ...
	I0108 13:18:55.852853   16646 node_ready.go:49] node "false-130508" has status "Ready":"True"
	I0108 13:18:55.852868   16646 node_ready.go:38] duration metric: took 6.242033ms waiting for node "false-130508" to be "Ready" ...
	I0108 13:18:55.852886   16646 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:18:55.860746   16646 pod_ready.go:78] waiting up to 5m0s for pod "etcd-false-130508" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:55.869367   16646 pod_ready.go:92] pod "etcd-false-130508" in "kube-system" namespace has status "Ready":"True"
	I0108 13:18:55.869385   16646 pod_ready.go:81] duration metric: took 8.617284ms waiting for pod "etcd-false-130508" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:55.869400   16646 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-false-130508" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:55.913254   16646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53276 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/false-130508/id_rsa Username:docker}
	I0108 13:18:55.916461   16646 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 13:18:55.916474   16646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 13:18:55.916556   16646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-130508
	I0108 13:18:55.989448   16646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53276 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/false-130508/id_rsa Username:docker}
	I0108 13:18:56.036359   16646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:18:56.132909   16646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 13:18:57.018408   16646 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.303327784s)
	I0108 13:18:57.018435   16646 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0108 13:18:57.189724   16646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.153322462s)
	I0108 13:18:57.189730   16646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.056794163s)
	I0108 13:18:57.235055   16646 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 13:18:57.272112   16646 addons.go:488] enableAddons completed in 1.687435167s
	I0108 13:18:57.422478   16646 pod_ready.go:92] pod "kube-apiserver-false-130508" in "kube-system" namespace has status "Ready":"True"
	I0108 13:18:57.422494   16646 pod_ready.go:81] duration metric: took 1.55308017s waiting for pod "kube-apiserver-false-130508" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:57.422511   16646 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-false-130508" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:57.428031   16646 pod_ready.go:92] pod "kube-controller-manager-false-130508" in "kube-system" namespace has status "Ready":"True"
	I0108 13:18:57.428042   16646 pod_ready.go:81] duration metric: took 5.521366ms waiting for pod "kube-controller-manager-false-130508" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:57.428048   16646 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-ftsqg" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:57.938854   16646 pod_ready.go:92] pod "kube-proxy-ftsqg" in "kube-system" namespace has status "Ready":"True"
	I0108 13:18:57.938871   16646 pod_ready.go:81] duration metric: took 510.815282ms waiting for pod "kube-proxy-ftsqg" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:57.938886   16646 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-false-130508" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:57.944476   16646 pod_ready.go:92] pod "kube-scheduler-false-130508" in "kube-system" namespace has status "Ready":"True"
	I0108 13:18:57.944488   16646 pod_ready.go:81] duration metric: took 5.591869ms waiting for pod "kube-scheduler-false-130508" in "kube-system" namespace to be "Ready" ...
	I0108 13:18:57.944494   16646 pod_ready.go:38] duration metric: took 2.091578407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
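
The pod_ready waits above check each system-critical pod's Ready condition until it reports True. A minimal client-go sketch of the same check, illustrative only and not part of the log (the pod name is taken from the log; the kubeconfig path is a hypothetical in-node path):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// condition the pod_ready waits above are checking.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-false-130508", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
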
	I0108 13:18:57.944514   16646 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:18:57.944577   16646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:18:57.955895   16646 api_server.go:71] duration metric: took 2.371253841s to wait for apiserver process to appear ...
	I0108 13:18:57.955909   16646 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:18:57.955922   16646 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53280/healthz ...
	I0108 13:18:57.962098   16646 api_server.go:278] https://127.0.0.1:53280/healthz returned 200:
	ok
	I0108 13:18:57.963617   16646 api_server.go:140] control plane version: v1.25.3
	I0108 13:18:57.963631   16646 api_server.go:130] duration metric: took 7.716839ms to wait for apiserver health ...
	I0108 13:18:57.963636   16646 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:18:58.053527   16646 system_pods.go:59] 8 kube-system pods found
	I0108 13:18:58.053547   16646 system_pods.go:61] "coredns-565d847f94-5fzcw" [a8c93daf-0378-48e1-b5f0-ad6ce56cd610] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 13:18:58.053552   16646 system_pods.go:61] "coredns-565d847f94-vpqmj" [46ba4b42-ffc2-4d15-81b5-0e925090bdf9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 13:18:58.053556   16646 system_pods.go:61] "etcd-false-130508" [dd1fd6f9-20ea-4fc7-959a-45a73211143c] Running
	I0108 13:18:58.053561   16646 system_pods.go:61] "kube-apiserver-false-130508" [e3ece47d-1ee2-4c17-a43e-1062fad9b8cf] Running
	I0108 13:18:58.053565   16646 system_pods.go:61] "kube-controller-manager-false-130508" [741d9c82-2eaf-4a36-9e48-adf48352eef0] Running
	I0108 13:18:58.053576   16646 system_pods.go:61] "kube-proxy-ftsqg" [ce7bb3da-e5cf-43d6-9608-fb458e865432] Running
	I0108 13:18:58.053580   16646 system_pods.go:61] "kube-scheduler-false-130508" [f99a9d1f-0907-4ac3-ae78-37124e211a41] Running
	I0108 13:18:58.053584   16646 system_pods.go:61] "storage-provisioner" [3d0f2d2f-2b62-4cb2-9bdb-3a667a453aed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 13:18:58.053588   16646 system_pods.go:74] duration metric: took 89.947989ms to wait for pod list to return data ...
	I0108 13:18:58.053594   16646 default_sa.go:34] waiting for default service account to be created ...
	I0108 13:18:58.249775   16646 default_sa.go:45] found service account: "default"
	I0108 13:18:58.249788   16646 default_sa.go:55] duration metric: took 196.188114ms for default service account to be created ...
	I0108 13:18:58.249795   16646 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 13:18:58.453075   16646 system_pods.go:86] 8 kube-system pods found
	I0108 13:18:58.453093   16646 system_pods.go:89] "coredns-565d847f94-5fzcw" [a8c93daf-0378-48e1-b5f0-ad6ce56cd610] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 13:18:58.453099   16646 system_pods.go:89] "coredns-565d847f94-vpqmj" [46ba4b42-ffc2-4d15-81b5-0e925090bdf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 13:18:58.453103   16646 system_pods.go:89] "etcd-false-130508" [dd1fd6f9-20ea-4fc7-959a-45a73211143c] Running
	I0108 13:18:58.453107   16646 system_pods.go:89] "kube-apiserver-false-130508" [e3ece47d-1ee2-4c17-a43e-1062fad9b8cf] Running
	I0108 13:18:58.453111   16646 system_pods.go:89] "kube-controller-manager-false-130508" [741d9c82-2eaf-4a36-9e48-adf48352eef0] Running
	I0108 13:18:58.453115   16646 system_pods.go:89] "kube-proxy-ftsqg" [ce7bb3da-e5cf-43d6-9608-fb458e865432] Running
	I0108 13:18:58.453119   16646 system_pods.go:89] "kube-scheduler-false-130508" [f99a9d1f-0907-4ac3-ae78-37124e211a41] Running
	I0108 13:18:58.453127   16646 system_pods.go:89] "storage-provisioner" [3d0f2d2f-2b62-4cb2-9bdb-3a667a453aed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 13:18:58.453135   16646 system_pods.go:126] duration metric: took 203.335574ms to wait for k8s-apps to be running ...
	I0108 13:18:58.453140   16646 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 13:18:58.453206   16646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:18:58.468360   16646 system_svc.go:56] duration metric: took 15.215733ms WaitForService to wait for kubelet.
	I0108 13:18:58.468375   16646 kubeadm.go:573] duration metric: took 2.883734822s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 13:18:58.468387   16646 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:18:58.649633   16646 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 13:18:58.649651   16646 node_conditions.go:123] node cpu capacity is 6
	I0108 13:18:58.649660   16646 node_conditions.go:105] duration metric: took 181.268995ms to run NodePressure ...
	I0108 13:18:58.649689   16646 start.go:217] waiting for startup goroutines ...
	I0108 13:18:58.650052   16646 ssh_runner.go:195] Run: rm -f paused
	I0108 13:18:58.693980   16646 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0108 13:18:58.737585   16646 out.go:177] * Done! kubectl is now configured to use "false-130508" cluster and "default" namespace by default
	I0108 13:18:56.645567   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:18:56.667412   16717 retry.go:31] will retry after 382.2356ms: https://127.0.0.1:52786/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:18:57.049839   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:18:57.057077   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 200:
	ok
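	For reference, the /healthz probe logged above can be re-run by hand once the apiserver answers; the sketch below is hypothetical and reuses the run-specific forwarded port and the profile certificate paths that appear later in this log (unauthenticated requests are rejected with 403 until RBAC bootstrap completes, as later retries in this log show):
	  # hypothetical manual re-run of the probe (port and paths are specific to this CI run)
	  curl --cacert /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt \
	       --cert   /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.crt \
	       --key    /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.key \
	       "https://127.0.0.1:52786/healthz?verbose"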
	I0108 13:18:57.070843   16717 system_pods.go:86] 5 kube-system pods found
	I0108 13:18:57.070860   16717 system_pods.go:89] "etcd-kubernetes-upgrade-130931" [6cb6bbc5-6315-4c4b-b3dd-69d47b773b46] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 13:18:57.070866   16717 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-130931" [36f9474d-f299-418f-ae5c-2f63c9e10675] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 13:18:57.070876   16717 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-130931" [7b5081e3-a142-438c-a3bd-20691919358d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 13:18:57.070889   16717 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-130931" [46210176-3f8f-49f3-9172-09ac9a1c32ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 13:18:57.070896   16717 system_pods.go:89] "storage-provisioner" [f4883686-825c-494b-bf5f-69baf3efa5bc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 13:18:57.070905   16717 kubeadm.go:611] needs reconfigure: missing components: kube-dns, kube-proxy
	I0108 13:18:57.070913   16717 kubeadm.go:1114] stopping kube-system containers ...
	I0108 13:18:57.070991   16717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:18:57.097666   16717 docker.go:444] Stopping containers: [c91ed258ea56 761bc79c537c 25e409f1070a a7cb262098d6 fd72ec15e751 a423e7104199 e87c699e5332 7c41f535b9c1 695f66b32d40 e409c8430bba c59d374b96dc ddfbed491ee3 c19c01cfeb90 f9570a96d4e5 c23dbf1e4439 9109ee28a07f 9f912cd2014b f06594a98ffe fc470a0677ac]
	I0108 13:18:57.097784   16717 ssh_runner.go:195] Run: docker stop c91ed258ea56 761bc79c537c 25e409f1070a a7cb262098d6 fd72ec15e751 a423e7104199 e87c699e5332 7c41f535b9c1 695f66b32d40 e409c8430bba c59d374b96dc ddfbed491ee3 c19c01cfeb90 f9570a96d4e5 c23dbf1e4439 9109ee28a07f 9f912cd2014b f06594a98ffe fc470a0677ac
	I0108 13:18:58.376956   16717 ssh_runner.go:235] Completed: docker stop c91ed258ea56 761bc79c537c 25e409f1070a a7cb262098d6 fd72ec15e751 a423e7104199 e87c699e5332 7c41f535b9c1 695f66b32d40 e409c8430bba c59d374b96dc ddfbed491ee3 c19c01cfeb90 f9570a96d4e5 c23dbf1e4439 9109ee28a07f 9f912cd2014b f06594a98ffe fc470a0677ac: (1.279130702s)
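	The two steps above (list kube-system containers via the k8s_ name filter, then stop them) can be condensed into a single pipeline; this is a minimal sketch, not minikube's own code, and assumes GNU/BusyBox xargs on the node:
	  # hypothetical condensed form of the list-then-stop sequence logged above
	  docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop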
	I0108 13:18:58.377095   16717 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 13:18:58.456517   16717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:18:58.519320   16717 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 21:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jan  8 21:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:18 /etc/kubernetes/scheduler.conf
	
	I0108 13:18:58.519408   16717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 13:18:58.529919   16717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 13:18:58.544100   16717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 13:18:58.555392   16717 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:18:58.555485   16717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 13:18:58.617706   16717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 13:18:58.626897   16717 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:18:58.626975   16717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 13:18:58.635231   16717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:18:58.643894   16717 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 13:18:58.643913   16717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:18:58.699263   16717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:18:59.144349   16717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:18:59.302903   16717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:18:59.370298   16717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:18:59.520969   16717 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:18:59.521047   16717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:19:00.038976   16717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:19:00.539004   16717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:19:00.550974   16717 api_server.go:71] duration metric: took 1.030002525s to wait for apiserver process to appear ...
	I0108 13:19:00.550994   16717 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:19:00.551003   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:19:04.073407   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 13:19:04.073429   16717 api_server.go:102] status: https://127.0.0.1:52786/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 13:19:04.573795   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:19:04.581574   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:19:04.581596   16717 api_server.go:102] status: https://127.0.0.1:52786/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:19:05.073553   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:19:05.080159   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:19:05.080174   16717 api_server.go:102] status: https://127.0.0.1:52786/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:19:05.574134   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:19:05.582822   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 200:
	ok
	I0108 13:19:05.588897   16717 api_server.go:140] control plane version: v1.25.3
	I0108 13:19:05.588910   16717 api_server.go:130] duration metric: took 5.037887045s to wait for apiserver health ...
	I0108 13:19:05.588920   16717 cni.go:95] Creating CNI manager for ""
	I0108 13:19:05.588925   16717 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:19:05.588931   16717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:19:05.595080   16717 system_pods.go:59] 5 kube-system pods found
	I0108 13:19:05.595150   16717 system_pods.go:61] "etcd-kubernetes-upgrade-130931" [6cb6bbc5-6315-4c4b-b3dd-69d47b773b46] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 13:19:05.595170   16717 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-130931" [36f9474d-f299-418f-ae5c-2f63c9e10675] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 13:19:05.595190   16717 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-130931" [7b5081e3-a142-438c-a3bd-20691919358d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 13:19:05.595203   16717 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-130931" [46210176-3f8f-49f3-9172-09ac9a1c32ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 13:19:05.595213   16717 system_pods.go:61] "storage-provisioner" [f4883686-825c-494b-bf5f-69baf3efa5bc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 13:19:05.595219   16717 system_pods.go:74] duration metric: took 6.282989ms to wait for pod list to return data ...
	I0108 13:19:05.595225   16717 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:19:05.599097   16717 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 13:19:05.599111   16717 node_conditions.go:123] node cpu capacity is 6
	I0108 13:19:05.599121   16717 node_conditions.go:105] duration metric: took 3.891467ms to run NodePressure ...
	I0108 13:19:05.599133   16717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:19:05.736260   16717 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 13:19:05.744824   16717 ops.go:34] apiserver oom_adj: -16
	I0108 13:19:05.744837   16717 kubeadm.go:631] restartCluster took 28.4373708s
	I0108 13:19:05.744848   16717 kubeadm.go:398] StartCluster complete in 28.476016049s
	I0108 13:19:05.744865   16717 settings.go:142] acquiring lock: {Name:mkc40aeb9f069e96cc5c51255984662f0292a058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:19:05.744984   16717 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:19:05.745686   16717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:19:05.746482   16717 kapi.go:59] client config for kubernetes-upgrade-130931: &rest.Config{Host:"https://127.0.0.1:52786", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:19:05.749095   16717 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-130931" rescaled to 1
	I0108 13:19:05.749149   16717 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:19:05.749153   16717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 13:19:05.749177   16717 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0108 13:19:05.770806   16717 out.go:177] * Verifying Kubernetes components...
	I0108 13:19:05.749390   16717 config.go:180] Loaded profile config "kubernetes-upgrade-130931": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:19:05.770876   16717 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-130931"
	I0108 13:19:05.770879   16717 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-130931"
	I0108 13:19:05.812609   16717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-130931"
	I0108 13:19:05.812608   16717 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-130931"
	W0108 13:19:05.812633   16717 addons.go:236] addon storage-provisioner should already be in state true
	I0108 13:19:05.812693   16717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:19:05.812705   16717 host.go:66] Checking if "kubernetes-upgrade-130931" exists ...
	I0108 13:19:05.813196   16717 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-130931 --format={{.State.Status}}
	I0108 13:19:05.813342   16717 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-130931 --format={{.State.Status}}
	I0108 13:19:05.895866   16717 kapi.go:59] client config for kubernetes-upgrade-130931: &rest.Config{Host:"https://127.0.0.1:52786", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubernetes-upgrade-130931/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 13:19:05.908850   16717 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 13:19:05.919634   16717 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:19:05.919655   16717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 13:19:05.919787   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:19:05.928293   16717 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-130931"
	W0108 13:19:05.928313   16717 addons.go:236] addon default-storageclass should already be in state true
	I0108 13:19:05.928332   16717 host.go:66] Checking if "kubernetes-upgrade-130931" exists ...
	I0108 13:19:05.928765   16717 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 13:19:05.928816   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:19:05.928881   16717 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-130931 --format={{.State.Status}}
	I0108 13:19:06.003458   16717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52782 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:19:06.010847   16717 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:19:06.010939   16717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:19:06.012414   16717 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 13:19:06.012429   16717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 13:19:06.012525   16717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-130931
	I0108 13:19:06.025962   16717 api_server.go:71] duration metric: took 276.774329ms to wait for apiserver process to appear ...
	I0108 13:19:06.026003   16717 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:19:06.026030   16717 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52786/healthz ...
	I0108 13:19:06.033571   16717 api_server.go:278] https://127.0.0.1:52786/healthz returned 200:
	ok
	I0108 13:19:06.035089   16717 api_server.go:140] control plane version: v1.25.3
	I0108 13:19:06.035100   16717 api_server.go:130] duration metric: took 9.089856ms to wait for apiserver health ...
	I0108 13:19:06.035108   16717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:19:06.041353   16717 system_pods.go:59] 5 kube-system pods found
	I0108 13:19:06.041374   16717 system_pods.go:61] "etcd-kubernetes-upgrade-130931" [6cb6bbc5-6315-4c4b-b3dd-69d47b773b46] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 13:19:06.041385   16717 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-130931" [36f9474d-f299-418f-ae5c-2f63c9e10675] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 13:19:06.041403   16717 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-130931" [7b5081e3-a142-438c-a3bd-20691919358d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 13:19:06.041412   16717 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-130931" [46210176-3f8f-49f3-9172-09ac9a1c32ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 13:19:06.041419   16717 system_pods.go:61] "storage-provisioner" [f4883686-825c-494b-bf5f-69baf3efa5bc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0108 13:19:06.041428   16717 system_pods.go:74] duration metric: took 6.312599ms to wait for pod list to return data ...
	I0108 13:19:06.041443   16717 kubeadm.go:573] duration metric: took 292.269771ms to wait for : map[apiserver:true system_pods:true] ...
	I0108 13:19:06.041452   16717 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:19:06.045201   16717 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 13:19:06.045217   16717 node_conditions.go:123] node cpu capacity is 6
	I0108 13:19:06.045227   16717 node_conditions.go:105] duration metric: took 3.772223ms to run NodePressure ...
	I0108 13:19:06.045235   16717 start.go:217] waiting for startup goroutines ...
	I0108 13:19:06.085271   16717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52782 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/kubernetes-upgrade-130931/id_rsa Username:docker}
	I0108 13:19:06.114225   16717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:19:06.231047   16717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 13:19:06.878911   16717 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 13:19:06.919689   16717 addons.go:488] enableAddons completed in 1.170483061s
	I0108 13:19:06.920109   16717 ssh_runner.go:195] Run: rm -f paused
	I0108 13:19:06.964686   16717 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0108 13:19:07.007526   16717 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-130931" cluster and "default" namespace by default
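	With both profiles reporting "Done!", a hypothetical follow-up sanity check on the upgraded profile (assuming a local minikube and kubectl on PATH; profile and context names are taken from the log above) would be:
	  # check the profile's machine/apiserver state and the kube-system pods it reports
	  minikube -p kubernetes-upgrade-130931 status
	  kubectl --context kubernetes-upgrade-130931 -n kube-system get pods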
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sun 2023-01-08 21:13:46 UTC, end at Sun 2023-01-08 21:19:08 UTC. --
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.418063972Z" level=info msg="ignoring event" container=f9570a96d4e5980aa9d026c2cf155fe3b1ead95ae656b01383246d38c2ce37ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.419385940Z" level=info msg="ignoring event" container=c59d374b96dc702f264b702da00ddcf9868bf0c6e550ad62bcd190b826265c4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.422725683Z" level=info msg="ignoring event" container=c19c01cfeb909cf557235a76fbfb0bc9864840cc6ef9edf8f9f6f1bb7edc6ca8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.424045238Z" level=info msg="ignoring event" container=695f66b32d40f0edb1896dbba446f39342b3146e9fb5962af5f47d47f4662b58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.599675393Z" level=info msg="Removing stale sandbox 02f237a50b248af40856a2781e7dacd896cbdeb12b5dd0b86500b779a35b615d (c59d374b96dc702f264b702da00ddcf9868bf0c6e550ad62bcd190b826265c4e)"
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.601945880Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint da63ced65221cf247a23fd2792f0ce9b8a509d2ec1147bfb304fe5f364bd197b e51a048891f41fa6358305ff49b3ec4b2b4928450b60b525d192e1f4a10057a2], retrying...."
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.686991944Z" level=info msg="Removing stale sandbox 325fd5a7bb71e67d504b0ad82f3980309b53c07c4b2f1484a14fa141a0d530c0 (ddfbed491ee36006ec00b7fb5478c80bc1a892d732c510f4b56e631518470747)"
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.688381482Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint da63ced65221cf247a23fd2792f0ce9b8a509d2ec1147bfb304fe5f364bd197b 14a2dc57e559975c42e7495d1f6d762ee27b8205ef03509b568c9ccc2cc4a856], retrying...."
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.768777349Z" level=info msg="Removing stale sandbox 98d0eb038e1cfedfda9b9b1ee152652878cdc509ec7c14735e6f7c1e16be0d27 (f9570a96d4e5980aa9d026c2cf155fe3b1ead95ae656b01383246d38c2ce37ee)"
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.770014486Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint da63ced65221cf247a23fd2792f0ce9b8a509d2ec1147bfb304fe5f364bd197b 0f73bbc4850b7f2535356c3fcc65347991fbbbedaaa18aeb3e44200c0f871d12], retrying...."
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.794534700Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.829810077Z" level=info msg="Loading containers: done."
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.838827268Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.838896377Z" level=info msg="Daemon has completed initialization"
	Jan 08 21:18:35 kubernetes-upgrade-130931 systemd[1]: Started Docker Application Container Engine.
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.860745260Z" level=info msg="API listen on [::]:2376"
	Jan 08 21:18:35 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:35.866401836Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 08 21:18:57 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:57.221205491Z" level=info msg="ignoring event" container=c91ed258ea56703f3fa19413c3ebde67c62e6f7dcfe47849f96768f1323330ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:57 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:57.225614674Z" level=info msg="ignoring event" container=7c41f535b9c1ce0adfb49a4263a43cdf46a45c518be37288f4caa7e463621d0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:57 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:57.233024456Z" level=info msg="ignoring event" container=761bc79c537c50e64ee0f224722c8f3e29d10b3cc080532a8092655c4143cde1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:57 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:57.233068500Z" level=info msg="ignoring event" container=a423e71041990317eb610d4f3194d036021d70fc88dc6169669befbce04d24f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:57 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:57.235823872Z" level=info msg="ignoring event" container=25e409f1070add0e63e1a99ecd478c2e4699e0685e09bdc5173a7471d5c59753 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:57 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:57.242778534Z" level=info msg="ignoring event" container=e87c699e5332b5711af2560e857dbd4661514e5994d252b9e5c550800c03c95c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:57 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:57.247951873Z" level=info msg="ignoring event" container=fd72ec15e7514651c01afc760734264a20168f1cc7476e568eecea68b6a85101 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 08 21:18:58 kubernetes-upgrade-130931 dockerd[12051]: time="2023-01-08T21:18:58.343239925Z" level=info msg="ignoring event" container=a7cb262098d63c166ff60df91ccc8d713ed6dada37266bfb1ce3ad06e2e229a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	60ccdc035a674       a8a176a5d5d69       8 seconds ago       Running             etcd                      3                   165446a394b73
	4c3af1a563dc8       6d23ec0e8b87e       8 seconds ago       Running             kube-scheduler            3                   dc5cdee38a86a
	e5821bd28296c       6039992312758       8 seconds ago       Running             kube-controller-manager   3                   41cd933cc6e2c
	0e11961424d12       0346dbd74bcb9       9 seconds ago       Running             kube-apiserver            2                   4e6181da796df
	c91ed258ea567       6d23ec0e8b87e       14 seconds ago      Exited              kube-scheduler            2                   fd72ec15e7514
	761bc79c537c5       a8a176a5d5d69       17 seconds ago      Exited              etcd                      2                   a423e71041990
	25e409f1070ad       6039992312758       19 seconds ago      Exited              kube-controller-manager   2                   e87c699e5332b
	a7cb262098d63       0346dbd74bcb9       32 seconds ago      Exited              kube-apiserver            1                   7c41f535b9c1c
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-130931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-130931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
	                    minikube.k8s.io/name=kubernetes-upgrade-130931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_08T13_18_18_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 08 Jan 2023 21:18:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-130931
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 08 Jan 2023 21:19:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 08 Jan 2023 21:19:04 +0000   Sun, 08 Jan 2023 21:18:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 08 Jan 2023 21:19:04 +0000   Sun, 08 Jan 2023 21:18:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 08 Jan 2023 21:19:04 +0000   Sun, 08 Jan 2023 21:18:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 08 Jan 2023 21:19:04 +0000   Sun, 08 Jan 2023 21:18:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-130931
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                dc065f8e2d1f42529ccfe18f8b887c8c
	  Boot ID:                    77459c6d-45b1-4c6b-b47b-e80c0f7ff94f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-130931                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         51s
	  kube-system                 kube-apiserver-kubernetes-upgrade-130931             250m (4%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-130931    200m (3%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-kubernetes-upgrade-130931             100m (1%)     0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 58s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x5 over 58s)  kubelet  Node kubernetes-upgrade-130931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x3 over 58s)  kubelet  Node kubernetes-upgrade-130931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x3 over 58s)  kubelet  Node kubernetes-upgrade-130931 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  58s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 51s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  51s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  51s                kubelet  Node kubernetes-upgrade-130931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s                kubelet  Node kubernetes-upgrade-130931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s                kubelet  Node kubernetes-upgrade-130931 status is now: NodeHasSufficientPID
	  Normal  NodeReady                50s                kubelet  Node kubernetes-upgrade-130931 status is now: NodeReady
	  Normal  Starting                 10s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet  Node kubernetes-upgrade-130931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet  Node kubernetes-upgrade-130931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet  Node kubernetes-upgrade-130931 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [Jan 8 20:35] FS-Cache: O-cookie d=00000000c8eceefa{9p.inode} n=00000000f7b4829b
	[  +0.000182] FS-Cache: O-key=[8] 'a1cf8c0500000000'
	[  +0.000077] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000070] FS-Cache: N-cookie d=00000000c8eceefa{9p.inode} n=00000000bb70f1a6
	[  +0.000102] FS-Cache: N-key=[8] 'a1cf8c0500000000'
	[  +3.200941] FS-Cache: Duplicate cookie detected
	[  +0.000038] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000038] FS-Cache: O-cookie d=00000000c8eceefa{9p.inode} n=0000000058ef753f
	[  +0.000051] FS-Cache: O-key=[8] 'a0cf8c0500000000'
	[  +0.000036] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000045] FS-Cache: N-cookie d=00000000c8eceefa{9p.inode} n=00000000bb70f1a6
	[  +0.000055] FS-Cache: N-key=[8] 'a0cf8c0500000000'
	[  +0.662289] FS-Cache: Duplicate cookie detected
	[  +0.000034] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000052] FS-Cache: O-cookie d=00000000c8eceefa{9p.inode} n=000000007d517ac1
	[  +0.000173] FS-Cache: O-key=[8] 'becf8c0500000000'
	[  +0.000039] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000044] FS-Cache: N-cookie d=00000000c8eceefa{9p.inode} n=00000000c94e52af
	[  +0.000088] FS-Cache: N-key=[8] 'becf8c0500000000'
	
	* 
	* ==> etcd [60ccdc035a67] <==
	* {"level":"info","ts":"2023-01-08T21:19:00.935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-01-08T21:19:00.935Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-01-08T21:19:00.935Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:19:00.935Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:19:00.937Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-08T21:19:00.937Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-08T21:19:00.937Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-08T21:19:00.937Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-08T21:19:00.937Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-08T21:19:02.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 4"}
	{"level":"info","ts":"2023-01-08T21:19:02.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-01-08T21:19:02.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-08T21:19:02.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 5"}
	{"level":"info","ts":"2023-01-08T21:19:02.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2023-01-08T21:19:02.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 5"}
	{"level":"info","ts":"2023-01-08T21:19:02.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2023-01-08T21:19:02.330Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-130931 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:19:02.330Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:19:02.330Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:19:02.330Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:19:02.330Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:19:02.331Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:19:02.333Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"warn","ts":"2023-01-08T21:19:05.920Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.290292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:756"}
	{"level":"info","ts":"2023-01-08T21:19:05.920Z","caller":"traceutil/trace.go:171","msg":"trace[874119030] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:342; }","duration":"106.521266ms","start":"2023-01-08T21:19:05.813Z","end":"2023-01-08T21:19:05.920Z","steps":["trace[874119030] 'range keys from in-memory index tree'  (duration: 104.593475ms)"],"step_count":1}
	
	* 
	* ==> etcd [761bc79c537c] <==
	* {"level":"info","ts":"2023-01-08T21:18:51.890Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-01-08T21:18:51.890Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:18:51.890Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-08T21:18:52.884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-08T21:18:52.885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-08T21:18:52.885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-08T21:18:52.885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-01-08T21:18:52.885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-08T21:18:52.885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-01-08T21:18:52.885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-08T21:18:52.888Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-130931 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-08T21:18:52.888Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:18:52.888Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-08T21:18:52.889Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-08T21:18:52.889Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-08T21:18:52.890Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-08T21:18:52.890Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-08T21:18:57.140Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-08T21:18:57.140Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-130931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2023/01/08 21:18:57 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2023/01/08 21:18:57 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2023-01-08T21:18:57.149Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-01-08T21:18:57.151Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-08T21:18:57.152Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-08T21:18:57.152Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-130931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  21:19:09 up  1:18,  0 users,  load average: 3.75, 2.44, 1.74
	Linux kubernetes-upgrade-130931 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [0e11961424d1] <==
	* I0108 21:19:04.061605       1 controller.go:85] Starting OpenAPI V3 controller
	I0108 21:19:04.061648       1 naming_controller.go:291] Starting NamingConditionController
	I0108 21:19:04.061665       1 establishing_controller.go:76] Starting EstablishingController
	I0108 21:19:04.061676       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0108 21:19:04.061714       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0108 21:19:04.061723       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0108 21:19:04.065936       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0108 21:19:04.071091       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0108 21:19:04.072614       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0108 21:19:04.127128       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0108 21:19:04.154330       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:19:04.214597       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:19:04.154512       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:19:04.154516       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:19:04.155430       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0108 21:19:04.156626       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 21:19:04.214015       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 21:19:04.216204       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 21:19:04.842108       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 21:19:05.056539       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:19:05.682237       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0108 21:19:05.690854       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0108 21:19:05.709975       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0108 21:19:05.724324       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:19:05.728933       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [a7cb262098d6] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0108 21:18:58.146202       1 logging.go:59] [core] [Channel #155 SubChannel #156] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0108 21:18:58.146264       1 logging.go:59] [core] [Channel #104 SubChannel #105] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0108 21:18:58.146301       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [25e409f1070a] <==
	* I0108 21:18:50.133987       1 serving.go:348] Generated self-signed cert in-memory
	I0108 21:18:50.413059       1 controllermanager.go:178] Version: v1.25.3
	I0108 21:18:50.413105       1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:18:50.414184       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0108 21:18:50.414229       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0108 21:18:50.414469       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:18:50.414586       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [e5821bd28296] <==
	* I0108 21:19:06.150761       1 controllermanager.go:603] Started "pv-protection"
	I0108 21:19:06.151009       1 pv_protection_controller.go:79] Starting PV protection controller
	I0108 21:19:06.151046       1 shared_informer.go:255] Waiting for caches to sync for PV protection
	I0108 21:19:06.154172       1 controllermanager.go:603] Started "endpointslice"
	I0108 21:19:06.154383       1 endpointslice_controller.go:261] Starting endpoint slice controller
	I0108 21:19:06.154394       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
	I0108 21:19:06.157758       1 controllermanager.go:603] Started "replicationcontroller"
	W0108 21:19:06.157810       1 core.go:232] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	W0108 21:19:06.157816       1 controllermanager.go:581] Skipping "route"
	I0108 21:19:06.157920       1 replica_set.go:205] Starting replicationcontroller controller
	I0108 21:19:06.157986       1 shared_informer.go:255] Waiting for caches to sync for ReplicationController
	I0108 21:19:06.167554       1 expand_controller.go:340] Starting expand controller
	I0108 21:19:06.167626       1 shared_informer.go:255] Waiting for caches to sync for expand
	I0108 21:19:06.167682       1 controllermanager.go:603] Started "persistentvolume-expander"
	I0108 21:19:06.170584       1 controllermanager.go:603] Started "pvc-protection"
	I0108 21:19:06.170660       1 pvc_protection_controller.go:103] "Starting PVC protection controller"
	I0108 21:19:06.170670       1 shared_informer.go:255] Waiting for caches to sync for PVC protection
	I0108 21:19:06.172709       1 controllermanager.go:603] Started "daemonset"
	I0108 21:19:06.173023       1 daemon_controller.go:291] Starting daemon sets controller
	I0108 21:19:06.173125       1 shared_informer.go:255] Waiting for caches to sync for daemon sets
	I0108 21:19:06.176830       1 controllermanager.go:603] Started "job"
	I0108 21:19:06.177019       1 job_controller.go:196] Starting job controller
	I0108 21:19:06.177026       1 shared_informer.go:255] Waiting for caches to sync for job
	I0108 21:19:06.213973       1 shared_informer.go:262] Caches are synced for tokens
	I0108 21:19:06.214223       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [4c3af1a563dc] <==
	* W0108 21:19:04.130291       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:19:04.130439       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:19:04.130458       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:19:04.130928       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:19:04.131238       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:19:04.130467       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:19:04.130687       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:19:04.131409       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:19:04.130836       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:19:04.131536       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:19:04.135487       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:19:04.135577       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:19:04.135703       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:19:04.135857       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 21:19:04.144790       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0108 21:19:04.145119       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0108 21:19:04.145755       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0108 21:19:04.146061       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0108 21:19:04.145786       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0108 21:19:04.147046       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0108 21:19:04.145857       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0108 21:19:04.147334       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0108 21:19:04.145902       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0108 21:19:04.147609       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	I0108 21:19:05.429124       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c91ed258ea56] <==
	* E0108 21:18:55.632677       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0108 21:18:55.632787       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0108 21:18:55.632860       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0108 21:18:55.633208       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0108 21:18:55.633224       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0108 21:18:55.633361       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0108 21:18:55.633373       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0108 21:18:55.633423       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0108 21:18:55.633434       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0108 21:18:55.633571       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0108 21:18:55.633584       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0108 21:18:55.633663       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0108 21:18:55.633673       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0108 21:18:55.633725       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0108 21:18:55.633789       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0108 21:18:55.634633       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0108 21:18:55.634652       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0108 21:18:55.634689       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0108 21:18:55.634699       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0108 21:18:55.635825       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0108 21:18:55.635849       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	I0108 21:18:55.723771       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:18:57.148347       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0108 21:18:57.148413       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0108 21:18:57.148482       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:13:46 UTC, end at Sun 2023-01-08 21:19:10 UTC. --
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.071347   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.171468   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.271685   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.372440   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.473027   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.573907   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.674592   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.775198   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.876113   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:02 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:02.976275   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.077318   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.178420   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.279683   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.380467   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.481375   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.582075   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.683161   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.783338   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.884143   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:03 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:03.984529   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:04 kubernetes-upgrade-130931 kubelet[13616]: E0108 21:19:04.084916   13616 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-130931\" not found"
	Jan 08 21:19:04 kubernetes-upgrade-130931 kubelet[13616]: I0108 21:19:04.223436   13616 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-130931"
	Jan 08 21:19:04 kubernetes-upgrade-130931 kubelet[13616]: I0108 21:19:04.223535   13616 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-130931"
	Jan 08 21:19:04 kubernetes-upgrade-130931 kubelet[13616]: I0108 21:19:04.435554   13616 apiserver.go:52] "Watching apiserver"
	Jan 08 21:19:04 kubernetes-upgrade-130931 kubelet[13616]: I0108 21:19:04.515759   13616 reconciler.go:169] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-130931 -n kubernetes-upgrade-130931
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-130931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-130931 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-130931 describe pod storage-provisioner: exit status 1 (53.677182ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-130931 describe pod storage-provisioner: exit status 1
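Note: the post-mortem above lists pods whose phase is not Running and then describes each one; the describe step exits non-zero here, which suggests the storage-provisioner pod disappeared between the two kubectl calls. A minimal sketch of the same check, using only the commands already shown in this log (the context name is the profile from this run), looks like:

	kubectl --context kubernetes-upgrade-130931 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	# Describing a pod returned by the list can still fail with NotFound if the pod
	# is deleted between the list and the describe, as appears to have happened above.
	kubectl --context kubernetes-upgrade-130931 describe pod storage-provisioner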
helpers_test.go:175: Cleaning up "kubernetes-upgrade-130931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-130931
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-130931: (2.778682199s)
--- FAIL: TestKubernetesUpgrade (582.12s)

                                                
                                    
x
+
TestMissingContainerUpgrade (59.1s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1698328454.exe start -p missing-upgrade-130832 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1698328454.exe start -p missing-upgrade-130832 --memory=2200 --driver=docker : exit status 78 (44.436912452s)

                                                
                                                
-- stdout --
	* [missing-upgrade-130832] minikube v1.9.1 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-130832
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-130832" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 181.52 KiB ... 542.91 MiB  [repeated download-progress updates elided]
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:08:57.051352676 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-130832" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:09:16.455018059 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
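Note: the provisioning failure above comes from the rewritten /lib/systemd/system/docker.service shown in the diff. As the in-file comments explain, the empty ExecStart= line clears the ExecStart inherited from the base unit, since systemd rejects a non-oneshot service that ends up with more than one ExecStart= setting. A rough sketch of the diff-then-install-then-restart idiom that the old minikube v1.9.1 binary runs over SSH (restructured here for readability; the exact command and flags from this run are shown verbatim above) is:

	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload
	  # this restart is the step that fails with "Job for docker.service failed" in this run
	  sudo systemctl restart docker
	}

When the restart fails, the follow-up suggested in the output, running "systemctl status docker.service" and "journalctl -xe" inside the node container, is the way to see why dockerd refused to start.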
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1698328454.exe start -p missing-upgrade-130832 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1698328454.exe start -p missing-upgrade-130832 --memory=2200 --driver=docker : exit status 70 (3.943644712s)

                                                
                                                
-- stdout --
	* [missing-upgrade-130832] minikube v1.9.1 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-130832
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-130832" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1698328454.exe start -p missing-upgrade-130832 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1698328454.exe start -p missing-upgrade-130832 --memory=2200 --driver=docker : exit status 70 (4.141509383s)

                                                
                                                
-- stdout --
	* [missing-upgrade-130832] minikube v1.9.1 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-130832
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-130832" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-01-08 13:09:29.0036 -0800 PST m=+2565.971270874
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-130832
helpers_test.go:235: (dbg) docker inspect missing-upgrade-130832:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ae8d945d3ff1bc076c4a73993ba04a9ab35e46d5b62dbe924655d8aca8c94dcb",
	        "Created": "2023-01-08T21:09:05.311147394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:09:05.536714612Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/ae8d945d3ff1bc076c4a73993ba04a9ab35e46d5b62dbe924655d8aca8c94dcb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae8d945d3ff1bc076c4a73993ba04a9ab35e46d5b62dbe924655d8aca8c94dcb/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae8d945d3ff1bc076c4a73993ba04a9ab35e46d5b62dbe924655d8aca8c94dcb/hosts",
	        "LogPath": "/var/lib/docker/containers/ae8d945d3ff1bc076c4a73993ba04a9ab35e46d5b62dbe924655d8aca8c94dcb/ae8d945d3ff1bc076c4a73993ba04a9ab35e46d5b62dbe924655d8aca8c94dcb-json.log",
	        "Name": "/missing-upgrade-130832",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-130832:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38129eecdbcf6d60f01ab9ec78f3feabef84e6df0922c4ce4f5870a62aa53342-init/diff:/var/lib/docker/overlay2/4339d0aef19b9e82156ed6afc0a47cc902fc7e9bf83087995128f2a07d2fd454/diff:/var/lib/docker/overlay2/d303941d115ffe958237f9f06597edd68b611d67f9a6d7a68b49f940b9a677e3/diff:/var/lib/docker/overlay2/6cbdf392e08105ea38ca83eca9e4da63a60e0073e49cf651f74cbdd31cae6dfc/diff:/var/lib/docker/overlay2/eb032dc3deff7e35843c9c958de7b67a4f949d2eb7550b30a6c383a28df69f68/diff:/var/lib/docker/overlay2/4729fa7b65cffb7556a1a432696949070f56a3e1709e942535e444ace41b7666/diff:/var/lib/docker/overlay2/8c50910932494f597346d37455e3f630b229a8b95381110da09c900f680e486d/diff:/var/lib/docker/overlay2/3fc62bffebce434327f6be9d4d68b030866e9b1b64f54ebd2dae7556275d7987/diff:/var/lib/docker/overlay2/791589ce01828c9fc12cd784310077fb88a0444738f266d4670d719d06e2b35d/diff:/var/lib/docker/overlay2/bdd8a36c4ab4740f2397cc074ad49bcafe8f3eb5907ee1acf9e79810e97ba44c/diff:/var/lib/docker/overlay2/4f0a94
f7f31b44d6b938b58ade4036241092f4f0cb39866054e5b845d514ae56/diff:/var/lib/docker/overlay2/d03fa159dc87ca20f9df79269ff41bcc822210e05df03d7f03daf8db97547f84/diff:/var/lib/docker/overlay2/ffb7dbfd87953e32509c9d88b2eed2f9e11e3c0c54346fcd320d63a9ae146adf/diff:/var/lib/docker/overlay2/9437b6153164db7345df3671c23cca8139f04180c381bfc8e5410593b1040b6d/diff:/var/lib/docker/overlay2/79c6ca63b86d57f8e869dd786d4708901808e8e2c6fc7032ccec4243014477d7/diff:/var/lib/docker/overlay2/61c78013698167262d184b0a246b42f98492bd17e1a447d5e678e78876f4bb32/diff:/var/lib/docker/overlay2/15afe3cbc4db00efef19ecae369bc70e33665459c64a90e981ecf683006d4000/diff:/var/lib/docker/overlay2/0ecfd946d3c53fda8be276543dc6b5d9558fb7090ce8d595afcdbd40da41e8ad/diff:/var/lib/docker/overlay2/c8632b1729b92fe4889110620fe2c174cffd28959a3c399ffe39d4ea83603eb2/diff:/var/lib/docker/overlay2/d6ec0093d0f478c677a422019670b6b0e2a56d7003fce172ff797cdd0949ee29/diff:/var/lib/docker/overlay2/752e36fa2214ba6ea532ce2d18b5a7018dcd32353755dce50b86190321d637ea/diff:/var/lib/d
ocker/overlay2/1fad0941cf22dc559a597fd62099a367ac653d6df5a7fc49cba958386e9bc883/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38129eecdbcf6d60f01ab9ec78f3feabef84e6df0922c4ce4f5870a62aa53342/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38129eecdbcf6d60f01ab9ec78f3feabef84e6df0922c4ce4f5870a62aa53342/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38129eecdbcf6d60f01ab9ec78f3feabef84e6df0922c4ce4f5870a62aa53342/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-130832",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-130832/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-130832",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-130832",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-130832",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b8193b1c69a3be0741863b3606daf0ef20d4bc6760994abe29088726c56d102",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52437"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0b8193b1c69a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "20b59e60b69fd660ca2e032b2bb5ba751470677df655eaedafb8ef1c675d3111",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "605c9d610329c81415a9a3659d318d78a2c0d04fb9f7008971ba10ffbce0f25e",
	                    "EndpointID": "20b59e60b69fd660ca2e032b2bb5ba751470677df655eaedafb8ef1c675d3111",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
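The inspect output above shows the kic container itself is healthy at the Docker level: State.Status is "running" and the 22/2376/8443 ports are published on 127.0.0.1, which again points the failure at the guest's docker.service rather than at the host. For triage, the two relevant fields can be pulled out directly (this assumes jq is available on the host; the container name is the one from this run):

    # extract just the container state and port map from the inspect JSON
    docker inspect missing-upgrade-130832 | jq '.[0].State.Status, .[0].NetworkSettings.Ports'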
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-130832 -n missing-upgrade-130832
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-130832 -n missing-upgrade-130832: exit status 6 (389.855625ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:09:29.441485   14366 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-130832" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-130832" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-130832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-130832
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-130832: (2.317656331s)
--- FAIL: TestMissingContainerUpgrade (59.10s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (56.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3442761751.exe start -p stopped-upgrade-131031 --memory=2200 --vm-driver=docker 
E0108 13:11:02.636284    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3442761751.exe start -p stopped-upgrade-131031 --memory=2200 --vm-driver=docker : exit status 70 (45.938339858s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-131031] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3002203339
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:10:56.913667662 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-131031" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:11:16.838011081 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-131031", then "minikube start -p stopped-upgrade-131031 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:11:16.838011081 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
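The comment block in the generated unit above spells out the failure mode systemd is guarding against: a unit may carry only one ExecStart= line unless it is Type=oneshot, so the generated file first emits an empty ExecStart= to clear any inherited value before setting its own. If the kic container from this profile is still present after the failed start, one way to check whether the unit systemd actually loaded ended up with a duplicate ExecStart= (container name taken from this run) is roughly:

    # show the unit (plus any drop-ins) as systemd sees it, and count ExecStart= lines
    docker exec stopped-upgrade-131031 systemctl cat docker.service --no-pager
    docker exec stopped-upgrade-131031 grep -c '^ExecStart=' /lib/systemd/system/docker.service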
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3442761751.exe start -p stopped-upgrade-131031 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3442761751.exe start -p stopped-upgrade-131031 --memory=2200 --vm-driver=docker : exit status 70 (4.481725997s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-131031] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig665649414
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-131031" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3442761751.exe start -p stopped-upgrade-131031 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3442761751.exe start -p stopped-upgrade-131031 --memory=2200 --vm-driver=docker : exit status 70 (4.363972518s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-131031] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3050046668
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-131031" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (56.86s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (54.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0108 13:20:23.945598    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:20:26.505921    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.130380174s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0108 13:20:31.626256    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.134387945s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.118953625s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 13:20:41.868573    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.120041245s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.13241763s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0108 13:21:02.349015    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.115368012s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.141783549s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (54.89s)
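The hairpin check above repeatedly execs into the netcat deployment and asks it to connect back to its own Service name on port 8080; with kubenet that traffic has to hairpin through the bridge back into the same pod, and every attempt here timed out after about five seconds. A manual re-run against the same cluster uses exactly the command from the log (context, deployment, and service name are the ones from this run; the Service named netcat is assumed to still exist):

    # retry the hairpin connection by hand from inside the netcat pod
    kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"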

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (254.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-132223 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0108 13:22:25.842540    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:22:55.369254    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:22:55.375609    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:22:55.387740    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:22:55.409855    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:22:55.450654    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:22:55.532157    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:22:55.692950    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:22:56.013205    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:22:56.654287    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:22:57.934759    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:23:00.494959    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:23:05.230724    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:23:05.615669    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:23:06.803109    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:23:15.855858    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:23:36.336309    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:23:59.407113    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:23:59.412278    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:23:59.423333    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:23:59.443388    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:23:59.484292    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:23:59.566449    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:23:59.726756    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:24:00.048633    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:24:00.689388    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:24:01.969652    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:24:04.530218    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:24:09.651487    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:24:17.297809    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:24:19.892516    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:24:28.723744    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:24:40.373220    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:24:40.717468    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:24:42.717510    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 13:24:59.662120    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 13:25:03.060551    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:03.066427    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:03.077984    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:03.100169    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:03.140603    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:03.220885    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:03.383013    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:03.703108    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:04.343275    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:05.623729    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:08.184037    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:10.040430    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:10.046189    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:10.056388    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:10.076858    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:10.117162    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:10.197260    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:10.357407    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:10.679502    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:11.320402    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:12.601177    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:13.304518    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:15.161487    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:16.949613    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 13:25:20.281736    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:21.335397    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:25:21.382892    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:25:23.544953    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:30.522432    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:25:39.218994    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:25:44.026829    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:25:49.071707    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:25:51.002970    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:26:03.761908    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:26:24.988729    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:26:31.963915    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
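The E0108 cert_rotation lines interleaved above come from the test binary's client-go certificate reloader, which is still watching client certificates that belong to profiles earlier tests have already deleted (auto-130508, kubenet-130508, bridge-130508, and so on); they are noise for this test and unrelated to its failure. A quick way to confirm the referenced certificates are simply gone from the host (path taken from the messages above):

    # list which profiles still have client material on disk
    ls /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/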
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-132223 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m13.656095055s)

                                                
                                                
-- stdout --
	* [old-k8s-version-132223] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-132223 in cluster old-k8s-version-132223
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 13:22:23.891727   17792 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:22:23.891941   17792 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:22:23.891947   17792 out.go:309] Setting ErrFile to fd 2...
	I0108 13:22:23.891951   17792 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:22:23.892096   17792 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 13:22:23.892651   17792 out.go:303] Setting JSON to false
	I0108 13:22:23.911472   17792 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4916,"bootTime":1673208027,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 13:22:23.911569   17792 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:22:23.950162   17792 out.go:177] * [old-k8s-version-132223] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:22:23.987924   17792 notify.go:220] Checking for updates...
	I0108 13:22:24.025797   17792 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:22:24.085571   17792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:22:24.145040   17792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:22:24.204000   17792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:22:24.262766   17792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 13:22:24.284435   17792 config.go:180] Loaded profile config "calico-130509": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:22:24.284521   17792 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:22:24.356807   17792 docker.go:137] docker version: linux-20.10.21
	I0108 13:22:24.357028   17792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:22:24.519464   17792 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:22:24.412781125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:22:24.542653   17792 out.go:177] * Using the docker driver based on user configuration
	I0108 13:22:24.562773   17792 start.go:294] selected driver: docker
	I0108 13:22:24.562794   17792 start.go:838] validating driver "docker" against <nil>
	I0108 13:22:24.562823   17792 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:22:24.565583   17792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:22:24.719542   17792 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:22:24.62017986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:22:24.719661   17792 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 13:22:24.719806   17792 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 13:22:24.741815   17792 out.go:177] * Using Docker Desktop driver with root privileges
	I0108 13:22:24.763127   17792 cni.go:95] Creating CNI manager for ""
	I0108 13:22:24.763147   17792 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:22:24.763160   17792 start_flags.go:317] config:
	{Name:old-k8s-version-132223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-132223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:22:24.800463   17792 out.go:177] * Starting control plane node old-k8s-version-132223 in cluster old-k8s-version-132223
	I0108 13:22:24.838451   17792 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 13:22:24.860104   17792 out.go:177] * Pulling base image ...
	I0108 13:22:24.902335   17792 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 13:22:24.902379   17792 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 13:22:24.902455   17792 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 13:22:24.902478   17792 cache.go:57] Caching tarball of preloaded images
	I0108 13:22:24.903266   17792 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:22:24.903475   17792 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 13:22:24.903943   17792 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/config.json ...
	I0108 13:22:24.904030   17792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/config.json: {Name:mk4599a97386d0b34865a6550b072e86bae01e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:22:24.965893   17792 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 13:22:24.965915   17792 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 13:22:24.965933   17792 cache.go:193] Successfully downloaded all kic artifacts
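The "skipping pull" / "skipping load" lines above reduce to a single probe: if the pinned kicbase image is already present in the local daemon, no pull is attempted. A minimal sketch of that probe (hypothetical helper name, not minikube's code), relying only on the fact that `docker image inspect` exits non-zero for an absent image:

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the given image reference is already loaded
// in the local Docker daemon; `docker image inspect` fails when it is not.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not present locally, a pull would be needed")
	}
}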
	I0108 13:22:24.965977   17792 start.go:364] acquiring machines lock for old-k8s-version-132223: {Name:mk8b4ad291c6c90d0dd57640fcf4c9826481575b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:22:24.966155   17792 start.go:368] acquired machines lock for "old-k8s-version-132223" in 165.408µs
	I0108 13:22:24.966188   17792 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-132223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-132223 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:22:24.966251   17792 start.go:125] createHost starting for "" (driver="docker")
	I0108 13:22:25.025267   17792 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 13:22:25.025484   17792 start.go:159] libmachine.API.Create for "old-k8s-version-132223" (driver="docker")
	I0108 13:22:25.025520   17792 client.go:168] LocalClient.Create starting
	I0108 13:22:25.025657   17792 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem
	I0108 13:22:25.025709   17792 main.go:134] libmachine: Decoding PEM data...
	I0108 13:22:25.025727   17792 main.go:134] libmachine: Parsing certificate...
	I0108 13:22:25.025790   17792 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem
	I0108 13:22:25.025823   17792 main.go:134] libmachine: Decoding PEM data...
	I0108 13:22:25.025834   17792 main.go:134] libmachine: Parsing certificate...
	I0108 13:22:25.026264   17792 cli_runner.go:164] Run: docker network inspect old-k8s-version-132223 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 13:22:25.087876   17792 cli_runner.go:211] docker network inspect old-k8s-version-132223 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 13:22:25.087985   17792 network_create.go:272] running [docker network inspect old-k8s-version-132223] to gather additional debugging logs...
	I0108 13:22:25.088004   17792 cli_runner.go:164] Run: docker network inspect old-k8s-version-132223
	W0108 13:22:25.143762   17792 cli_runner.go:211] docker network inspect old-k8s-version-132223 returned with exit code 1
	I0108 13:22:25.143787   17792 network_create.go:275] error running [docker network inspect old-k8s-version-132223]: docker network inspect old-k8s-version-132223: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-132223
	I0108 13:22:25.143800   17792 network_create.go:277] output of [docker network inspect old-k8s-version-132223]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-132223
	
	** /stderr **
	I0108 13:22:25.143894   17792 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 13:22:25.200779   17792 network.go:306] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000d04b18] misses:0}
	I0108 13:22:25.200818   17792 network.go:239] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:22:25.200835   17792 network_create.go:115] attempt to create docker network old-k8s-version-132223 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 13:22:25.200932   17792 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-132223 old-k8s-version-132223
	W0108 13:22:25.258251   17792 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-132223 old-k8s-version-132223 returned with exit code 1
	W0108 13:22:25.258291   17792 network_create.go:107] failed to create docker network old-k8s-version-132223 192.168.49.0/24, will retry: subnet is taken
	I0108 13:22:25.258546   17792 network.go:297] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d04b18] amended:false}} dirty:map[] misses:0}
	I0108 13:22:25.258566   17792 network.go:242] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:22:25.258793   17792 network.go:306] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d04b18] amended:true}} dirty:map[192.168.49.0:0xc000d04b18 192.168.58.0:0xc000c4f610] misses:0}
	I0108 13:22:25.258805   17792 network.go:239] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:22:25.258815   17792 network_create.go:115] attempt to create docker network old-k8s-version-132223 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0108 13:22:25.258902   17792 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-132223 old-k8s-version-132223
	W0108 13:22:25.315255   17792 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-132223 old-k8s-version-132223 returned with exit code 1
	W0108 13:22:25.315292   17792 network_create.go:107] failed to create docker network old-k8s-version-132223 192.168.58.0/24, will retry: subnet is taken
	I0108 13:22:25.315555   17792 network.go:297] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d04b18] amended:true}} dirty:map[192.168.49.0:0xc000d04b18 192.168.58.0:0xc000c4f610] misses:1}
	I0108 13:22:25.315572   17792 network.go:242] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:22:25.315845   17792 network.go:306] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d04b18] amended:true}} dirty:map[192.168.49.0:0xc000d04b18 192.168.58.0:0xc000c4f610 192.168.67.0:0xc000b08520] misses:1}
	I0108 13:22:25.315861   17792 network.go:239] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:22:25.315870   17792 network_create.go:115] attempt to create docker network old-k8s-version-132223 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0108 13:22:25.315966   17792 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-132223 old-k8s-version-132223
	W0108 13:22:25.377700   17792 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-132223 old-k8s-version-132223 returned with exit code 1
	W0108 13:22:25.377740   17792 network_create.go:107] failed to create docker network old-k8s-version-132223 192.168.67.0/24, will retry: subnet is taken
	I0108 13:22:25.378017   17792 network.go:297] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d04b18] amended:true}} dirty:map[192.168.49.0:0xc000d04b18 192.168.58.0:0xc000c4f610 192.168.67.0:0xc000b08520] misses:2}
	I0108 13:22:25.378037   17792 network.go:242] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:22:25.378275   17792 network.go:306] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d04b18] amended:true}} dirty:map[192.168.49.0:0xc000d04b18 192.168.58.0:0xc000c4f610 192.168.67.0:0xc000b08520 192.168.76.0:0xc000c4f228] misses:2}
	I0108 13:22:25.378299   17792 network.go:239] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0108 13:22:25.378307   17792 network_create.go:115] attempt to create docker network old-k8s-version-132223 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0108 13:22:25.378392   17792 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-132223 old-k8s-version-132223
	I0108 13:22:25.490426   17792 network_create.go:99] docker network old-k8s-version-132223 192.168.76.0/24 created
	I0108 13:22:25.490523   17792 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-132223" container
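The four network-create attempts above (192.168.49.0/24 through 192.168.76.0/24) are the visible half of a first-free-subnet walk: reserve a candidate /24, ask Docker to create it, and step to the next candidate when the daemon reports the pool as taken. A minimal Go sketch of that pattern (hypothetical helper, not minikube's network_create.go):

package main

import (
	"fmt"
	"os/exec"
)

// createFreeNetwork tries successive 192.168.x.0/24 candidates (stepping by 9,
// as in the log: .49, .58, .67, .76) until `docker network create` succeeds.
func createFreeNetwork(name string) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).Run()
		if err != nil {
			// Typically "Pool overlaps with other one on this address space".
			continue
		}
		return subnet, nil
	}
	return "", fmt.Errorf("no free /24 found for network %q", name)
}

func main() {
	subnet, err := createFreeNetwork("example-net")
	fmt.Println(subnet, err)
}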
	I0108 13:22:25.490667   17792 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 13:22:25.558811   17792 cli_runner.go:164] Run: docker volume create old-k8s-version-132223 --label name.minikube.sigs.k8s.io=old-k8s-version-132223 --label created_by.minikube.sigs.k8s.io=true
	I0108 13:22:25.618838   17792 oci.go:103] Successfully created a docker volume old-k8s-version-132223
	I0108 13:22:25.618990   17792 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-132223-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-132223 --entrypoint /usr/bin/test -v old-k8s-version-132223:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0108 13:22:26.086349   17792 oci.go:107] Successfully prepared a docker volume old-k8s-version-132223
	I0108 13:22:26.086382   17792 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 13:22:26.086397   17792 kic.go:179] Starting extracting preloaded images to volume ...
	I0108 13:22:26.086531   17792 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-132223:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 13:22:33.975564   17792 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-132223:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (7.888864038s)
	I0108 13:22:33.975589   17792 kic.go:188] duration metric: took 7.889156 seconds to extract preloaded images to volume
	I0108 13:22:33.975744   17792 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 13:22:34.136172   17792 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-132223 --name old-k8s-version-132223 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-132223 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-132223 --network old-k8s-version-132223 --ip 192.168.76.2 --volume old-k8s-version-132223:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0108 13:22:34.590317   17792 cli_runner.go:164] Run: docker container inspect old-k8s-version-132223 --format={{.State.Running}}
	I0108 13:22:34.662657   17792 cli_runner.go:164] Run: docker container inspect old-k8s-version-132223 --format={{.State.Status}}
	I0108 13:22:34.728269   17792 cli_runner.go:164] Run: docker exec old-k8s-version-132223 stat /var/lib/dpkg/alternatives/iptables
	I0108 13:22:34.869761   17792 oci.go:144] the created container "old-k8s-version-132223" has a running status.
	I0108 13:22:34.869793   17792 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa...
	I0108 13:22:34.930428   17792 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 13:22:35.052046   17792 cli_runner.go:164] Run: docker container inspect old-k8s-version-132223 --format={{.State.Status}}
	I0108 13:22:35.122108   17792 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 13:22:35.122169   17792 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-132223 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 13:22:35.251373   17792 cli_runner.go:164] Run: docker container inspect old-k8s-version-132223 --format={{.State.Status}}
	I0108 13:22:35.312792   17792 machine.go:88] provisioning docker machine ...
	I0108 13:22:35.312839   17792 ubuntu.go:169] provisioning hostname "old-k8s-version-132223"
	I0108 13:22:35.312940   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:35.384940   17792 main.go:134] libmachine: Using SSH client type: native
	I0108 13:22:35.385151   17792 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53837 <nil> <nil>}
	I0108 13:22:35.385165   17792 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-132223 && echo "old-k8s-version-132223" | sudo tee /etc/hostname
	I0108 13:22:35.521565   17792 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-132223
	
	I0108 13:22:35.521686   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:35.591524   17792 main.go:134] libmachine: Using SSH client type: native
	I0108 13:22:35.591707   17792 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53837 <nil> <nil>}
	I0108 13:22:35.591722   17792 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-132223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-132223/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-132223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 13:22:35.715191   17792 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:22:35.715238   17792 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 13:22:35.715273   17792 ubuntu.go:177] setting up certificates
	I0108 13:22:35.715287   17792 provision.go:83] configureAuth start
	I0108 13:22:35.715441   17792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132223
	I0108 13:22:35.786168   17792 provision.go:138] copyHostCerts
	I0108 13:22:35.786270   17792 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 13:22:35.786279   17792 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 13:22:35.786483   17792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 13:22:35.786692   17792 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 13:22:35.786698   17792 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 13:22:35.786775   17792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 13:22:35.786944   17792 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 13:22:35.786950   17792 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 13:22:35.787024   17792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 13:22:35.787158   17792 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-132223 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-132223]
	I0108 13:22:35.909607   17792 provision.go:172] copyRemoteCerts
	I0108 13:22:35.909677   17792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 13:22:35.909744   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:35.983377   17792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53837 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:22:36.069945   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 13:22:36.088776   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 13:22:36.107522   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 13:22:36.126328   17792 provision.go:86] duration metric: configureAuth took 411.023067ms
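The configureAuth step above issues a server certificate whose SANs cover both the container's static IP (192.168.76.2) and the localhost/minikube names it will be reached by. A rough, self-contained sketch of issuing such a certificate with Go's crypto/x509 (illustration only, not libmachine's actual code; it self-signs so the sketch stays runnable, whereas the provisioner signs with the pre-existing machine CA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-132223"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list logged above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-132223"},
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER server certificate\n", len(der))
}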
	I0108 13:22:36.126350   17792 ubuntu.go:193] setting minikube options for container-runtime
	I0108 13:22:36.126528   17792 config.go:180] Loaded profile config "old-k8s-version-132223": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0108 13:22:36.126615   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:36.190226   17792 main.go:134] libmachine: Using SSH client type: native
	I0108 13:22:36.190430   17792 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53837 <nil> <nil>}
	I0108 13:22:36.190442   17792 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 13:22:36.310793   17792 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 13:22:36.310821   17792 ubuntu.go:71] root file system type: overlay
	I0108 13:22:36.310990   17792 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 13:22:36.311104   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:36.379645   17792 main.go:134] libmachine: Using SSH client type: native
	I0108 13:22:36.380508   17792 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53837 <nil> <nil>}
	I0108 13:22:36.380602   17792 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 13:22:36.509831   17792 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 13:22:36.509937   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:36.570549   17792 main.go:134] libmachine: Using SSH client type: native
	I0108 13:22:36.570708   17792 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53837 <nil> <nil>}
	I0108 13:22:36.570720   17792 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 13:22:37.226209   17792 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-08 21:22:36.506912482 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0108 13:22:37.226258   17792 machine.go:91] provisioned docker machine in 1.913438414s
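The diff-then-replace SSH command above is a write-if-changed guard: docker.service.new only overwrites the live unit, and systemd is only reloaded and docker restarted, when the rendered unit actually differs from what is on disk. A stripped-down sketch of the same guard (hypothetical helper run on the node rather than over SSH, not minikube's implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

// syncUnit rewrites the unit file and returns true only when the rendered
// content differs from what is installed; callers then daemon-reload and
// restart docker, mirroring the `diff -u ... || { mv ...; }` pattern above.
func syncUnit(path string, rendered []byte) (changed bool, err error) {
	current, _ := os.ReadFile(path) // a missing file simply reads as empty
	if bytes.Equal(current, rendered) {
		return false, nil // unchanged: skip daemon-reload and docker restart
	}
	return true, os.WriteFile(path, rendered, 0644)
}

func main() {
	path := filepath.Join(os.TempDir(), "docker.service")
	unit := []byte("[Unit]\nDescription=example unit\n")
	changed, err := syncUnit(path, unit)
	fmt.Println("changed:", changed, "err:", err)
	if changed {
		// On a real node this is where `systemctl daemon-reload`,
		// `systemctl enable docker` and `systemctl restart docker` would run.
	}
}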
	I0108 13:22:37.226265   17792 client.go:171] LocalClient.Create took 12.200687693s
	I0108 13:22:37.226284   17792 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-132223" took 12.200747785s
	I0108 13:22:37.226297   17792 start.go:300] post-start starting for "old-k8s-version-132223" (driver="docker")
	I0108 13:22:37.226302   17792 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 13:22:37.226387   17792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 13:22:37.226462   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:37.295415   17792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53837 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:22:37.384226   17792 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 13:22:37.388257   17792 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 13:22:37.388274   17792 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 13:22:37.388281   17792 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 13:22:37.388287   17792 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 13:22:37.388297   17792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 13:22:37.388393   17792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 13:22:37.388589   17792 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 13:22:37.388809   17792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 13:22:37.396616   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:22:37.415191   17792 start.go:303] post-start completed in 188.882673ms
	I0108 13:22:37.416197   17792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132223
	I0108 13:22:37.487538   17792 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/config.json ...
	I0108 13:22:37.487978   17792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 13:22:37.488046   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:37.556237   17792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53837 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:22:37.646490   17792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 13:22:37.654479   17792 start.go:128] duration metric: createHost completed in 12.688154591s
	I0108 13:22:37.654508   17792 start.go:83] releasing machines lock for "old-k8s-version-132223", held for 12.688286274s
	I0108 13:22:37.654757   17792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132223
	I0108 13:22:37.718168   17792 ssh_runner.go:195] Run: cat /version.json
	I0108 13:22:37.718194   17792 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 13:22:37.718271   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:37.718313   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:37.790200   17792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53837 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:22:37.790211   17792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53837 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:22:37.874642   17792 ssh_runner.go:195] Run: systemctl --version
	I0108 13:22:38.132272   17792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 13:22:38.144554   17792 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 13:22:38.144667   17792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 13:22:38.156716   17792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 13:22:38.171541   17792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 13:22:38.237794   17792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 13:22:38.310722   17792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:22:38.380764   17792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 13:22:38.655356   17792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:22:38.695575   17792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:22:38.804711   17792 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	I0108 13:22:38.804827   17792 cli_runner.go:164] Run: docker exec -t old-k8s-version-132223 dig +short host.docker.internal
	I0108 13:22:38.925544   17792 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 13:22:38.925675   17792 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 13:22:38.930353   17792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
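The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any stale line for that name, append the current IP, and copy the result back over /etc/hosts. The same update, sketched in Go (illustrative only; the paths and names come from the log, the helper itself is not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and appends a
// fresh "<ip>\t<host>" mapping, matching the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing /etc/hosts needs root; point at a scratch copy to try it out.
	if err := ensureHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}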
	I0108 13:22:38.942051   17792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:22:39.004776   17792 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 13:22:39.004885   17792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:22:39.030150   17792 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 13:22:39.030168   17792 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:22:39.030274   17792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:22:39.055251   17792 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 13:22:39.055269   17792 cache_images.go:84] Images are preloaded, skipping loading
	I0108 13:22:39.055372   17792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 13:22:39.129443   17792 cni.go:95] Creating CNI manager for ""
	I0108 13:22:39.129462   17792 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:22:39.129476   17792 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 13:22:39.129507   17792 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-132223 NodeName:old-k8s-version-132223 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 13:22:39.129618   17792 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-132223"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-132223
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 13:22:39.129700   17792 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-132223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-132223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
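Editor's note: the drop-in above replaces the kubelet's ExecStart with minikube's v1.16.0 binary and node-specific flags. A minimal sketch of how the effective unit and its recent log could be inspected on the node (an assumption here is the docker driver's convention that the node container carries the profile name, so docker exec works from the host; these commands are not part of the test itself):
	docker exec old-k8s-version-132223 systemctl cat kubelet             # unit file plus the 10-kubeadm.conf drop-in written a few lines below
	docker exec old-k8s-version-132223 systemctl status kubelet --no-pager
	docker exec old-k8s-version-132223 journalctl -u kubelet -n 50 --no-pager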
	I0108 13:22:39.129781   17792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 13:22:39.138180   17792 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 13:22:39.138246   17792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 13:22:39.145794   17792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0108 13:22:39.158655   17792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 13:22:39.171710   17792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
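Editor's note: one thing worth checking when the kubelet later refuses connections on 10248 is the cgroup driver. The KubeletConfiguration generated above sets cgroupDriver: systemd, and it must match what the node's Docker daemon reports (minikube queried this earlier with docker info --format {{.CgroupDriver}}). A hedged sketch of comparing the two by hand once kubeadm init has written the kubelet config (again assuming the node container is named after the profile; not something the test runs):
	docker exec old-k8s-version-132223 docker info --format '{{.CgroupDriver}}'         # driver used by the node's Docker daemon
	docker exec old-k8s-version-132223 grep cgroupDriver /var/lib/kubelet/config.yaml   # driver the kubelet was configured with
If the two disagree (for example cgroupfs vs systemd), the kubelet exits shortly after start, which would produce exactly the connection-refused healthz probes seen further down.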
	I0108 13:22:39.185153   17792 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 13:22:39.189332   17792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
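Editor's note: the one-liner above rewrites /etc/hosts in place: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and copies the temp file back over /etc/hosts. A quick check that the entry landed (sketch, run on the node):
	grep control-plane.minikube.internal /etc/hosts
	# expected: 192.168.76.2	control-plane.minikube.internal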
	I0108 13:22:39.199519   17792 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223 for IP: 192.168.76.2
	I0108 13:22:39.199660   17792 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 13:22:39.199729   17792 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 13:22:39.199786   17792 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/client.key
	I0108 13:22:39.199803   17792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/client.crt with IP's: []
	I0108 13:22:39.389387   17792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/client.crt ...
	I0108 13:22:39.389407   17792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/client.crt: {Name:mk6cf372ae9eb8de3ca8232399d58d9379bf6999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:22:39.389735   17792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/client.key ...
	I0108 13:22:39.389744   17792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/client.key: {Name:mk797b87be20c3b398ded2c5c0b60d18653e4a4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:22:39.389993   17792 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.key.31bdca25
	I0108 13:22:39.390014   17792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 13:22:39.613966   17792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.crt.31bdca25 ...
	I0108 13:22:39.613982   17792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.crt.31bdca25: {Name:mk6de7974d213c9b05df64350e5c1a819191dfc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:22:39.614290   17792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.key.31bdca25 ...
	I0108 13:22:39.614299   17792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.key.31bdca25: {Name:mk94b7f9a501091cc5735973c16d7cea251cd81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:22:39.614496   17792 certs.go:320] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.crt
	I0108 13:22:39.614663   17792 certs.go:324] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.key
	I0108 13:22:39.614833   17792 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.key
	I0108 13:22:39.614852   17792 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.crt with IP's: []
	I0108 13:22:39.740429   17792 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.crt ...
	I0108 13:22:39.740443   17792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.crt: {Name:mkc8ed0767ad8131c748af892ea20409e5895b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:22:39.740714   17792 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.key ...
	I0108 13:22:39.740722   17792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.key: {Name:mk789cb6962213d4f60987ff52c2c810e72d58ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:22:39.741148   17792 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 13:22:39.741199   17792 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 13:22:39.741222   17792 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 13:22:39.741259   17792 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 13:22:39.741292   17792 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 13:22:39.741326   17792 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 13:22:39.741395   17792 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:22:39.741918   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 13:22:39.761104   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 13:22:39.779004   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 13:22:39.797013   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 13:22:39.814797   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 13:22:39.832675   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 13:22:39.850627   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 13:22:39.872396   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 13:22:39.890827   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 13:22:39.909888   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 13:22:39.928022   17792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 13:22:39.947359   17792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 13:22:39.960979   17792 ssh_runner.go:195] Run: openssl version
	I0108 13:22:39.966615   17792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 13:22:39.975085   17792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 13:22:39.979424   17792 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 13:22:39.979485   17792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 13:22:39.985199   17792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 13:22:39.993567   17792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 13:22:40.002369   17792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 13:22:40.006494   17792 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 13:22:40.006545   17792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 13:22:40.012889   17792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 13:22:40.021419   17792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 13:22:40.029995   17792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:22:40.034133   17792 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:22:40.034185   17792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:22:40.039808   17792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
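Editor's note: the openssl x509 -hash calls above compute the subject-hash filenames (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's CA lookup expects in /etc/ssl/certs, and the ln -fs calls create those links. A sketch of confirming the CA is then usable through that directory (run on the node; the apiserver cert path is the one installed earlier in this log):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem      # prints b5213941, matching the symlink above
	openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt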
	I0108 13:22:40.048190   17792 kubeadm.go:396] StartCluster: {Name:old-k8s-version-132223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-132223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:22:40.048306   17792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:22:40.071763   17792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 13:22:40.080554   17792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:22:40.088341   17792 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 13:22:40.088403   17792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:22:40.096353   17792 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 13:22:40.096387   17792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 13:22:40.149446   17792 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 13:22:40.149498   17792 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 13:22:40.475093   17792 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 13:22:40.475218   17792 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 13:22:40.475352   17792 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 13:22:40.783556   17792 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 13:22:40.783663   17792 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 13:22:40.790432   17792 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 13:22:40.919315   17792 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 13:22:40.941697   17792 out.go:204]   - Generating certificates and keys ...
	I0108 13:22:40.941832   17792 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 13:22:40.941917   17792 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 13:22:41.027123   17792 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 13:22:41.200786   17792 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0108 13:22:41.486483   17792 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0108 13:22:41.647161   17792 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0108 13:22:42.107781   17792 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0108 13:22:42.108128   17792 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-132223 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0108 13:22:42.308825   17792 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0108 13:22:42.308959   17792 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-132223 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0108 13:22:42.395214   17792 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 13:22:42.484558   17792 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 13:22:42.529803   17792 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0108 13:22:42.529913   17792 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 13:22:42.591338   17792 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 13:22:42.722955   17792 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 13:22:42.910781   17792 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 13:22:43.082842   17792 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 13:22:43.083584   17792 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 13:22:43.106114   17792 out.go:204]   - Booting up control plane ...
	I0108 13:22:43.106301   17792 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 13:22:43.106457   17792 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 13:22:43.106610   17792 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 13:22:43.106811   17792 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 13:22:43.107204   17792 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 13:23:23.093356   17792 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 13:23:23.093847   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:23:23.093999   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:23:28.095935   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:23:28.096165   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:23:38.098207   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:23:38.098424   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:23:58.099077   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:23:58.099269   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:24:38.100146   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:24:38.100315   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:24:38.100323   17792 kubeadm.go:317] 
	I0108 13:24:38.100367   17792 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 13:24:38.100414   17792 kubeadm.go:317] 	timed out waiting for the condition
	I0108 13:24:38.100423   17792 kubeadm.go:317] 
	I0108 13:24:38.100468   17792 kubeadm.go:317] This error is likely caused by:
	I0108 13:24:38.100523   17792 kubeadm.go:317] 	- The kubelet is not running
	I0108 13:24:38.100616   17792 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 13:24:38.100625   17792 kubeadm.go:317] 
	I0108 13:24:38.100715   17792 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 13:24:38.100798   17792 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 13:24:38.100834   17792 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 13:24:38.100841   17792 kubeadm.go:317] 
	I0108 13:24:38.100947   17792 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 13:24:38.101019   17792 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 13:24:38.101096   17792 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 13:24:38.101127   17792 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 13:24:38.101175   17792 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 13:24:38.101199   17792 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0108 13:24:38.104055   17792 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 13:24:38.104192   17792 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0108 13:24:38.104277   17792 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 13:24:38.104338   17792 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 13:24:38.104396   17792 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0108 13:24:38.104563   17792 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-132223 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-132223 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-132223 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-132223 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 13:24:38.104589   17792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 13:24:38.521429   17792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:24:38.531493   17792 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 13:24:38.531559   17792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:24:38.539267   17792 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 13:24:38.539296   17792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 13:24:38.587701   17792 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 13:24:38.587745   17792 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 13:24:38.886787   17792 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 13:24:38.886897   17792 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 13:24:38.886977   17792 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 13:24:39.110632   17792 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 13:24:39.111391   17792 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 13:24:39.118524   17792 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 13:24:39.186453   17792 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 13:24:39.208217   17792 out.go:204]   - Generating certificates and keys ...
	I0108 13:24:39.208359   17792 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 13:24:39.208442   17792 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 13:24:39.208553   17792 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 13:24:39.208685   17792 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 13:24:39.208751   17792 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 13:24:39.208823   17792 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 13:24:39.208889   17792 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 13:24:39.208979   17792 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 13:24:39.209055   17792 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 13:24:39.209127   17792 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 13:24:39.209160   17792 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 13:24:39.209208   17792 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 13:24:39.345498   17792 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 13:24:39.597810   17792 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 13:24:39.735603   17792 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 13:24:39.871890   17792 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 13:24:39.872567   17792 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 13:24:39.894131   17792 out.go:204]   - Booting up control plane ...
	I0108 13:24:39.894472   17792 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 13:24:39.894594   17792 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 13:24:39.894698   17792 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 13:24:39.894927   17792 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 13:24:39.895305   17792 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 13:25:19.880753   17792 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 13:25:19.881216   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:25:19.881447   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:25:24.882236   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:25:24.882440   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:25:34.883505   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:25:34.883718   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:25:54.884905   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:25:54.885125   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:26:34.886435   17792 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:26:34.886668   17792 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:26:34.886681   17792 kubeadm.go:317] 
	I0108 13:26:34.886744   17792 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 13:26:34.886789   17792 kubeadm.go:317] 	timed out waiting for the condition
	I0108 13:26:34.886798   17792 kubeadm.go:317] 
	I0108 13:26:34.886841   17792 kubeadm.go:317] This error is likely caused by:
	I0108 13:26:34.886880   17792 kubeadm.go:317] 	- The kubelet is not running
	I0108 13:26:34.887012   17792 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 13:26:34.887024   17792 kubeadm.go:317] 
	I0108 13:26:34.887167   17792 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 13:26:34.887219   17792 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 13:26:34.887261   17792 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 13:26:34.887271   17792 kubeadm.go:317] 
	I0108 13:26:34.887389   17792 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 13:26:34.887499   17792 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 13:26:34.887634   17792 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 13:26:34.887716   17792 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 13:26:34.887790   17792 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 13:26:34.887824   17792 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0108 13:26:34.890640   17792 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 13:26:34.890751   17792 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0108 13:26:34.890839   17792 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 13:26:34.890912   17792 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 13:26:34.890979   17792 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 13:26:34.891010   17792 kubeadm.go:398] StartCluster complete in 3m54.841796608s
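Editor's note: both kubeadm init attempts above stall at the same point: the kubelet never answers its healthz probe on 127.0.0.1:10248, so wait-control-plane times out. Before the automated log gathering that follows, the log's own suggestions amount to this triage sequence on the node (a sketch, not something the test runs):
	systemctl status kubelet --no-pager           # is the service running at all?
	journalctl -xeu kubelet | tail -n 50          # why it exited, if it did
	curl -sS http://localhost:10248/healthz       # the probe kubeadm keeps retrying
	docker ps -a | grep kube | grep -v pause      # did any control-plane container start?
	# docker logs <CONTAINERID>                   # then inspect whichever container is failing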
	I0108 13:26:34.891113   17792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:26:34.915671   17792 logs.go:274] 0 containers: []
	W0108 13:26:34.915684   17792 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:26:34.915773   17792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:26:34.939871   17792 logs.go:274] 0 containers: []
	W0108 13:26:34.939885   17792 logs.go:276] No container was found matching "etcd"
	I0108 13:26:34.939972   17792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:26:34.963340   17792 logs.go:274] 0 containers: []
	W0108 13:26:34.963354   17792 logs.go:276] No container was found matching "coredns"
	I0108 13:26:34.963437   17792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:26:34.986736   17792 logs.go:274] 0 containers: []
	W0108 13:26:34.986750   17792 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:26:34.986830   17792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:26:35.009865   17792 logs.go:274] 0 containers: []
	W0108 13:26:35.009880   17792 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:26:35.009973   17792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:26:35.033302   17792 logs.go:274] 0 containers: []
	W0108 13:26:35.033316   17792 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:26:35.033403   17792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:26:35.057245   17792 logs.go:274] 0 containers: []
	W0108 13:26:35.057262   17792 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:26:35.057346   17792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:26:35.084844   17792 logs.go:274] 0 containers: []
	W0108 13:26:35.084864   17792 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:26:35.084875   17792 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:26:35.084888   17792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:26:35.145634   17792 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:26:35.145648   17792 logs.go:123] Gathering logs for Docker ...
	I0108 13:26:35.145654   17792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:26:35.162222   17792 logs.go:123] Gathering logs for container status ...
	I0108 13:26:35.162241   17792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:26:37.212693   17792 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050430387s)
	I0108 13:26:37.212798   17792 logs.go:123] Gathering logs for kubelet ...
	I0108 13:26:37.212805   17792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:26:37.250444   17792 logs.go:123] Gathering logs for dmesg ...
	I0108 13:26:37.250460   17792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
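Editor's note: the kubelet, dmesg, Docker and container-status output gathered above is the same material minikube's log command collects; when reproducing this failure locally, a single command against the same profile would pull it in one shot (sketch):
	minikube logs -p old-k8s-version-132223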
	W0108 13:26:37.263439   17792 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 13:26:37.263457   17792 out.go:239] * 
	* 
	W0108 13:26:37.263587   17792 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 13:26:37.263602   17792 out.go:239] * 
	* 
	W0108 13:26:37.264306   17792 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 13:26:37.326866   17792 out.go:177] 
	W0108 13:26:37.368944   17792 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 13:26:37.369015   17792 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 13:26:37.369054   17792 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 13:26:37.410683   17792 out.go:177] 

                                                
                                                
** /stderr **
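The kubeadm wait-control-plane timeout above, together with minikube's own suggestion, points at the kubelet never becoming healthy on this kicbase node (the suggestion and the linked issue concern the kubelet cgroup driver). A minimal re-run sketch that applies the suggested flag, reusing the profile name and a subset of the flags from the failing invocation below (illustrative only, possibly after deleting the broken profile first; not executed as part of this report):

	out/minikube-darwin-amd64 start -p old-k8s-version-132223 --memory=2200 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
	# inspect the kubelet inside the node container, per the advice in the log above
	# (container name taken from the docker inspect output below)
	docker exec old-k8s-version-132223 journalctl -xeu kubelet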
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-132223 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-132223
helpers_test.go:235: (dbg) docker inspect old-k8s-version-132223:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f",
	        "Created": "2023-01-08T21:22:34.19825588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:22:34.581261089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hostname",
	        "HostsPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hosts",
	        "LogPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f-json.log",
	        "Name": "/old-k8s-version-132223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-132223:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-132223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-132223",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-132223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-132223",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a44b5ea4f63d69763ec6750681e431c8debb39754fe2757cf04ba1e607f16602",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53837"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53838"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53839"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53840"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53841"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a44b5ea4f63d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-132223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76595a40dec8",
	                        "old-k8s-version-132223"
	                    ],
	                    "NetworkID": "8205ca6e86e721bc270dfbf0384edb3c10ca81d0afb1c6b7756a52514e9f6e59",
	                    "EndpointID": "41ddf3b13cfa4a16143d03b6bb44700afdc095d631dbb5cf33615d94747de308",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 6 (415.166296ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:26:37.968020   18113 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-132223" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-132223" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (254.16s)
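The status check above exits with status 6 because the profile never made it into the kubeconfig ("old-k8s-version-132223" does not appear in the jenkins kubeconfig) and kubectl still points at a stale minikube-vm context. Following the warning's own advice, a hedged verification/repair sequence from the same checkout would be (kubectl assumed to be on PATH; not run as part of this report):

	kubectl config get-contexts
	out/minikube-darwin-amd64 update-context -p old-k8s-version-132223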

                                                
                                    

TestStartStop/group/old-k8s-version/serial/DeployApp (1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-132223 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-132223 create -f testdata/busybox.yaml: exit status 1 (35.785495ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-132223" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-132223 create -f testdata/busybox.yaml failed: exit status 1
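The create fails before ever reaching a cluster: the kubectl context for the profile was never written, so this is a direct cascade of the FirstStart failure above rather than a separate deployment problem. One way to see what state minikube itself believes its profiles are in (illustrative only, not part of the recorded run):

	out/minikube-darwin-amd64 profile list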
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-132223
helpers_test.go:235: (dbg) docker inspect old-k8s-version-132223:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f",
	        "Created": "2023-01-08T21:22:34.19825588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:22:34.581261089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hostname",
	        "HostsPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hosts",
	        "LogPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f-json.log",
	        "Name": "/old-k8s-version-132223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-132223:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-132223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-132223",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-132223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-132223",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a44b5ea4f63d69763ec6750681e431c8debb39754fe2757cf04ba1e607f16602",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53837"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53838"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53839"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53840"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53841"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a44b5ea4f63d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-132223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76595a40dec8",
	                        "old-k8s-version-132223"
	                    ],
	                    "NetworkID": "8205ca6e86e721bc270dfbf0384edb3c10ca81d0afb1c6b7756a52514e9f6e59",
	                    "EndpointID": "41ddf3b13cfa4a16143d03b6bb44700afdc095d631dbb5cf33615d94747de308",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 6 (423.817438ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:26:38.491830   18126 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-132223" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-132223" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-132223
helpers_test.go:235: (dbg) docker inspect old-k8s-version-132223:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f",
	        "Created": "2023-01-08T21:22:34.19825588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:22:34.581261089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hostname",
	        "HostsPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hosts",
	        "LogPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f-json.log",
	        "Name": "/old-k8s-version-132223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-132223:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-132223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-132223",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-132223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-132223",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a44b5ea4f63d69763ec6750681e431c8debb39754fe2757cf04ba1e607f16602",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53837"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53838"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53839"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53840"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53841"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a44b5ea4f63d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-132223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76595a40dec8",
	                        "old-k8s-version-132223"
	                    ],
	                    "NetworkID": "8205ca6e86e721bc270dfbf0384edb3c10ca81d0afb1c6b7756a52514e9f6e59",
	                    "EndpointID": "41ddf3b13cfa4a16143d03b6bb44700afdc095d631dbb5cf33615d94747de308",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 6 (418.138629ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:26:38.972636   18140 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-132223" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-132223" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-132223 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0108 13:26:43.256658    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:26:44.875703    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-132223 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.220676499s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-132223 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-132223 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-132223 describe deploy/metrics-server -n kube-system: exit status 1 (35.304164ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-132223" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-132223 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-132223
helpers_test.go:235: (dbg) docker inspect old-k8s-version-132223:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f",
	        "Created": "2023-01-08T21:22:34.19825588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:22:34.581261089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hostname",
	        "HostsPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hosts",
	        "LogPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f-json.log",
	        "Name": "/old-k8s-version-132223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-132223:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-132223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-132223",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-132223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-132223",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a44b5ea4f63d69763ec6750681e431c8debb39754fe2757cf04ba1e607f16602",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53837"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53838"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53839"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53840"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53841"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a44b5ea4f63d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-132223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76595a40dec8",
	                        "old-k8s-version-132223"
	                    ],
	                    "NetworkID": "8205ca6e86e721bc270dfbf0384edb3c10ca81d0afb1c6b7756a52514e9f6e59",
	                    "EndpointID": "41ddf3b13cfa4a16143d03b6bb44700afdc095d631dbb5cf33615d94747de308",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 6 (433.363262ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:28:08.721044   18489 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-132223" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-132223" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (490.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-132223 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-132223 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m5.771573866s)

                                                
                                                
-- stdout --
	* [old-k8s-version-132223] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-132223 in cluster old-k8s-version-132223
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-132223" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 13:28:10.941998   18533 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:28:10.942202   18533 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:28:10.942211   18533 out.go:309] Setting ErrFile to fd 2...
	I0108 13:28:10.942215   18533 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:28:10.942382   18533 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 13:28:10.943167   18533 out.go:303] Setting JSON to false
	I0108 13:28:10.967625   18533 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5263,"bootTime":1673208027,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 13:28:10.967745   18533 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:28:10.991805   18533 out.go:177] * [old-k8s-version-132223] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:28:11.066149   18533 notify.go:220] Checking for updates...
	I0108 13:28:11.103740   18533 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:28:11.199702   18533 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:28:11.241908   18533 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:28:11.300170   18533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:28:11.377898   18533 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 13:28:11.415115   18533 config.go:180] Loaded profile config "old-k8s-version-132223": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0108 13:28:11.437925   18533 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0108 13:28:11.462829   18533 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:28:11.539042   18533 docker.go:137] docker version: linux-20.10.21
	I0108 13:28:11.539288   18533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:28:11.735153   18533 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:28:11.600431825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:28:11.777677   18533 out.go:177] * Using the docker driver based on existing profile
	I0108 13:28:11.799559   18533 start.go:294] selected driver: docker
	I0108 13:28:11.799635   18533 start.go:838] validating driver "docker" against &{Name:old-k8s-version-132223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-132223 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:28:11.799821   18533 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:28:11.803640   18533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:28:11.951695   18533 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:28:11.856846868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:28:11.951854   18533 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 13:28:11.951875   18533 cni.go:95] Creating CNI manager for ""
	I0108 13:28:11.951886   18533 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:28:11.951898   18533 start_flags.go:317] config:
	{Name:old-k8s-version-132223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-132223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:28:11.995796   18533 out.go:177] * Starting control plane node old-k8s-version-132223 in cluster old-k8s-version-132223
	I0108 13:28:12.017640   18533 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 13:28:12.039542   18533 out.go:177] * Pulling base image ...
	I0108 13:28:12.081501   18533 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 13:28:12.081507   18533 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 13:28:12.081608   18533 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 13:28:12.081625   18533 cache.go:57] Caching tarball of preloaded images
	I0108 13:28:12.081843   18533 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:28:12.081864   18533 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 13:28:12.082875   18533 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/config.json ...
	I0108 13:28:12.140500   18533 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 13:28:12.140524   18533 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 13:28:12.140543   18533 cache.go:193] Successfully downloaded all kic artifacts
	I0108 13:28:12.140610   18533 start.go:364] acquiring machines lock for old-k8s-version-132223: {Name:mk8b4ad291c6c90d0dd57640fcf4c9826481575b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:28:12.140707   18533 start.go:368] acquired machines lock for "old-k8s-version-132223" in 76.809µs
	I0108 13:28:12.140732   18533 start.go:96] Skipping create...Using existing machine configuration
	I0108 13:28:12.140742   18533 fix.go:55] fixHost starting: 
	I0108 13:28:12.140997   18533 cli_runner.go:164] Run: docker container inspect old-k8s-version-132223 --format={{.State.Status}}
	I0108 13:28:12.199636   18533 fix.go:103] recreateIfNeeded on old-k8s-version-132223: state=Stopped err=<nil>
	W0108 13:28:12.199671   18533 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 13:28:12.221793   18533 out.go:177] * Restarting existing docker container for "old-k8s-version-132223" ...
	I0108 13:28:12.243537   18533 cli_runner.go:164] Run: docker start old-k8s-version-132223
	I0108 13:28:12.604131   18533 cli_runner.go:164] Run: docker container inspect old-k8s-version-132223 --format={{.State.Status}}
	I0108 13:28:12.668107   18533 kic.go:415] container "old-k8s-version-132223" state is running.
	I0108 13:28:12.668736   18533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132223
	I0108 13:28:12.734270   18533 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/config.json ...
	I0108 13:28:12.734827   18533 machine.go:88] provisioning docker machine ...
	I0108 13:28:12.734856   18533 ubuntu.go:169] provisioning hostname "old-k8s-version-132223"
	I0108 13:28:12.734954   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:12.808288   18533 main.go:134] libmachine: Using SSH client type: native
	I0108 13:28:12.808496   18533 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53990 <nil> <nil>}
	I0108 13:28:12.808510   18533 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-132223 && echo "old-k8s-version-132223" | sudo tee /etc/hostname
	I0108 13:28:12.945008   18533 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-132223
	
	I0108 13:28:12.945129   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:13.009536   18533 main.go:134] libmachine: Using SSH client type: native
	I0108 13:28:13.009698   18533 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53990 <nil> <nil>}
	I0108 13:28:13.009711   18533 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-132223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-132223/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-132223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 13:28:13.128153   18533 main.go:134] libmachine: SSH cmd err, output: <nil>: 
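The SSH snippet just above is minikube's hostname fixup for /etc/hosts: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. A standalone sketch of the same replace-or-append logic (illustrative only; the hostname is simply the profile name used in this run):

    #!/bin/bash
    # Illustrative sketch of the /etc/hosts hostname fixup shown above.
    NEW_HOSTNAME="old-k8s-version-132223"
    if ! grep -q "\s${NEW_HOSTNAME}$" /etc/hosts; then           # hostname not mapped yet
        if grep -q '^127\.0\.1\.1\s' /etc/hosts; then            # reuse the existing 127.0.1.1 line
            sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
        else                                                     # no 127.0.1.1 line: append one
            echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
        fi
    fi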
	I0108 13:28:13.128180   18533 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 13:28:13.128197   18533 ubuntu.go:177] setting up certificates
	I0108 13:28:13.128207   18533 provision.go:83] configureAuth start
	I0108 13:28:13.128291   18533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132223
	I0108 13:28:13.188878   18533 provision.go:138] copyHostCerts
	I0108 13:28:13.188979   18533 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 13:28:13.188990   18533 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 13:28:13.189124   18533 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 13:28:13.189342   18533 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 13:28:13.189349   18533 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 13:28:13.189410   18533 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 13:28:13.189565   18533 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 13:28:13.189571   18533 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 13:28:13.189629   18533 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 13:28:13.189756   18533 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-132223 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-132223]
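The server certificate above is generated in-process with SANs for the node IP, localhost, minikube, and the profile name. Purely as an illustration (this is not how minikube itself does it), an equivalent certificate could be minted with openssl from the same CA material and SAN list; run under bash:

    # Illustrative only: an openssl equivalent of the SAN-bearing server cert above.
    # Assumes ca.pem, ca-key.pem and server-key.pem already exist, as they do per the log.
    MK=/Users/jenkins/minikube-integration/15565-2761/.minikube
    openssl req -new -key "$MK/machines/server-key.pem" \
      -subj "/O=jenkins.old-k8s-version-132223" -out /tmp/server.csr
    openssl x509 -req -in /tmp/server.csr -CA "$MK/certs/ca.pem" -CAkey "$MK/certs/ca-key.pem" \
      -CAcreateserial -days 365 -sha256 \
      -extfile <(printf 'subjectAltName=IP:192.168.76.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:old-k8s-version-132223') \
      -out "$MK/machines/server.pem"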
	I0108 13:28:13.250210   18533 provision.go:172] copyRemoteCerts
	I0108 13:28:13.250269   18533 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 13:28:13.250343   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:13.311704   18533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53990 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:28:13.399831   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 13:28:13.417248   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 13:28:13.434936   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 13:28:13.453215   18533 provision.go:86] duration metric: configureAuth took 324.993149ms
	I0108 13:28:13.453226   18533 ubuntu.go:193] setting minikube options for container-runtime
	I0108 13:28:13.453397   18533 config.go:180] Loaded profile config "old-k8s-version-132223": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0108 13:28:13.453480   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:13.514205   18533 main.go:134] libmachine: Using SSH client type: native
	I0108 13:28:13.514379   18533 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53990 <nil> <nil>}
	I0108 13:28:13.514388   18533 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 13:28:13.631363   18533 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 13:28:13.631391   18533 ubuntu.go:71] root file system type: overlay
	I0108 13:28:13.631545   18533 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 13:28:13.631646   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:13.693427   18533 main.go:134] libmachine: Using SSH client type: native
	I0108 13:28:13.693598   18533 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53990 <nil> <nil>}
	I0108 13:28:13.693645   18533 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 13:28:13.822804   18533 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 13:28:13.822920   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:13.884550   18533 main.go:134] libmachine: Using SSH client type: native
	I0108 13:28:13.884712   18533 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53990 <nil> <nil>}
	I0108 13:28:13.884731   18533 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 13:28:14.007611   18533 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:28:14.007628   18533 machine.go:91] provisioned docker machine in 1.272785092s
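The unit rollout a few lines up is deliberately idempotent: the rendered docker.service is written to docker.service.new, and the daemon is only reloaded, enabled, and restarted when it actually differs from the installed unit. The same pattern as a readable sketch (paths match the log; illustrative only):

    #!/bin/bash
    # Illustrative sketch: install a unit file and bounce the service only if it changed.
    NEW=/lib/systemd/system/docker.service.new
    CUR=/lib/systemd/system/docker.service
    if ! sudo diff -u "$CUR" "$NEW"; then        # diff exits non-zero when the files differ
        sudo mv "$NEW" "$CUR"
        sudo systemctl -f daemon-reload
        sudo systemctl -f enable docker
        sudo systemctl -f restart docker
    fi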
	I0108 13:28:14.007638   18533 start.go:300] post-start starting for "old-k8s-version-132223" (driver="docker")
	I0108 13:28:14.007643   18533 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 13:28:14.007713   18533 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 13:28:14.007780   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:14.070814   18533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53990 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:28:14.159272   18533 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 13:28:14.163526   18533 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 13:28:14.163542   18533 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 13:28:14.163549   18533 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 13:28:14.163554   18533 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 13:28:14.163568   18533 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 13:28:14.163652   18533 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 13:28:14.163820   18533 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 13:28:14.164014   18533 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 13:28:14.171384   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:28:14.191378   18533 start.go:303] post-start completed in 183.728186ms
	I0108 13:28:14.191498   18533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 13:28:14.191574   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:14.263214   18533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53990 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:28:14.349477   18533 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 13:28:14.356832   18533 fix.go:57] fixHost completed within 2.216079343s
	I0108 13:28:14.356848   18533 start.go:83] releasing machines lock for "old-k8s-version-132223", held for 2.216124347s
	I0108 13:28:14.356954   18533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132223
	I0108 13:28:14.421375   18533 ssh_runner.go:195] Run: cat /version.json
	I0108 13:28:14.421389   18533 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0108 13:28:14.421457   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:14.421491   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:14.486914   18533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53990 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:28:14.487109   18533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53990 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/old-k8s-version-132223/id_rsa Username:docker}
	I0108 13:28:14.570274   18533 ssh_runner.go:195] Run: systemctl --version
	I0108 13:28:14.823155   18533 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 13:28:14.834813   18533 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 13:28:14.834895   18533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 13:28:14.848201   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 13:28:14.864226   18533 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 13:28:14.935619   18533 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 13:28:15.007376   18533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:28:15.074838   18533 ssh_runner.go:195] Run: sudo systemctl restart docker
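A few lines up, minikube wrote a two-line /etc/crictl.yaml pointing crictl at the dockershim socket, then unmasked and restarted docker with the new unit. For reference, that endpoint could be queried with crictl like this (illustrative; assumes crictl is available inside the kic container):

    # Illustrative only: query the CRI endpoint configured in /etc/crictl.yaml.
    sudo crictl --config /etc/crictl.yaml info    # runtime status as seen over dockershim.sock
    sudo crictl --config /etc/crictl.yaml ps -a   # containers known to that CRI endpoint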
	I0108 13:28:15.289525   18533 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:28:15.321862   18533 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:28:15.400207   18533 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	I0108 13:28:15.400367   18533 cli_runner.go:164] Run: docker exec -t old-k8s-version-132223 dig +short host.docker.internal
	I0108 13:28:15.516889   18533 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 13:28:15.517031   18533 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 13:28:15.521529   18533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
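The bash one-liner above pins host.minikube.internal in /etc/hosts by filtering out any previous entry and appending the freshly resolved IP. The same replace-or-append pattern as a small reusable function (the function name is illustrative, not from minikube):

    # Illustrative sketch of the /etc/hosts pinning done above for host.minikube.internal.
    pin_hosts_entry() {                       # usage: pin_hosts_entry <ip> <name>
        local ip="$1" name="$2"
        { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
        sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
    }
    pin_hosts_entry 192.168.65.2 host.minikube.internal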
	I0108 13:28:15.531705   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:15.593527   18533 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 13:28:15.593628   18533 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:28:15.617259   18533 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 13:28:15.617277   18533 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:28:15.617398   18533 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:28:15.641837   18533 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0108 13:28:15.641861   18533 cache_images.go:84] Images are preloaded, skipping loading
	I0108 13:28:15.641963   18533 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 13:28:15.713217   18533 cni.go:95] Creating CNI manager for ""
	I0108 13:28:15.713239   18533 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:28:15.713272   18533 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 13:28:15.713307   18533 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-132223 NodeName:old-k8s-version-132223 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 13:28:15.713498   18533 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-132223"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-132223
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 13:28:15.713572   18533 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-132223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-132223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 13:28:15.713661   18533 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0108 13:28:15.721671   18533 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 13:28:15.721763   18533 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 13:28:15.729392   18533 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0108 13:28:15.742399   18533 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 13:28:15.755450   18533 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
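The 2120-byte payload scp'd to /var/tmp/minikube/kubeadm.yaml.new here is the kubeadm config rendered a few lines earlier. One way to sanity-check such a file on the node (illustrative, not something this test run does) is to ask kubeadm which images it implies, using the same pinned binary path the restart uses later in this log:

    # Illustrative only: sanity-check the rendered kubeadm config on the node.
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new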
	I0108 13:28:15.768962   18533 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0108 13:28:15.772961   18533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:28:15.783355   18533 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223 for IP: 192.168.76.2
	I0108 13:28:15.783500   18533 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 13:28:15.783581   18533 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 13:28:15.783716   18533 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/client.key
	I0108 13:28:15.783810   18533 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.key.31bdca25
	I0108 13:28:15.783889   18533 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.key
	I0108 13:28:15.784179   18533 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 13:28:15.784222   18533 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 13:28:15.784237   18533 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 13:28:15.784278   18533 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 13:28:15.784322   18533 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 13:28:15.784360   18533 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 13:28:15.784455   18533 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:28:15.785114   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 13:28:15.803711   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 13:28:15.822008   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 13:28:15.840088   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/old-k8s-version-132223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 13:28:15.858686   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 13:28:15.876862   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 13:28:15.894464   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 13:28:15.912698   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 13:28:15.930347   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 13:28:15.948529   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 13:28:15.985872   18533 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 13:28:16.003136   18533 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 13:28:16.016009   18533 ssh_runner.go:195] Run: openssl version
	I0108 13:28:16.021686   18533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 13:28:16.030173   18533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:28:16.034140   18533 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:28:16.034196   18533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:28:16.039561   18533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 13:28:16.047937   18533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 13:28:16.056867   18533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 13:28:16.061177   18533 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 13:28:16.061233   18533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 13:28:16.067134   18533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 13:28:16.075224   18533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 13:28:16.083636   18533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 13:28:16.087581   18533 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 13:28:16.087633   18533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 13:28:16.093191   18533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
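The ls/openssl/ln sequence above installs minikubeCA.pem, 4083.pem, and 40832.pem under /usr/share/ca-certificates and links each into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 respectively). The hash-derived link name comes straight from openssl, roughly:

    # Illustrative only: how the /etc/ssl/certs/<hash>.0 link names above are derived.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                  # prints the subject hash, b5213941 in this run
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"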
	I0108 13:28:16.100955   18533 kubeadm.go:396] StartCluster: {Name:old-k8s-version-132223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-132223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:28:16.101079   18533 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:28:16.123842   18533 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 13:28:16.132094   18533 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 13:28:16.132113   18533 kubeadm.go:627] restartCluster start
	I0108 13:28:16.132190   18533 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 13:28:16.141831   18533 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:16.141945   18533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-132223
	I0108 13:28:16.203698   18533 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-132223" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:28:16.203881   18533 kubeconfig.go:146] "old-k8s-version-132223" context is missing from /Users/jenkins/minikube-integration/15565-2761/kubeconfig - will repair!
	I0108 13:28:16.204228   18533 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:28:16.205668   18533 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 13:28:16.213728   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:16.213806   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:16.222888   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:16.423937   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:16.424067   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:16.435013   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:16.623969   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:16.624157   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:16.637106   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:16.823900   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:16.824042   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:16.834664   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:17.023963   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:17.024087   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:17.034849   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:17.223887   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:17.224029   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:17.234832   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:17.423906   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:17.424003   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:17.434604   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:17.623868   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:17.623958   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:17.634216   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:17.823982   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:17.824082   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:17.833515   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:18.024995   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:18.025061   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:18.034184   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:18.224392   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:18.224481   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:18.233962   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:18.423960   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:18.424101   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:18.434414   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:18.623978   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:18.624153   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:18.635120   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:18.823415   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:18.823557   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:18.834395   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:19.023986   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:19.024159   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:19.035259   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:19.223920   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:19.224070   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:19.234146   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:19.234155   18533 api_server.go:165] Checking apiserver status ...
	I0108 13:28:19.234205   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:28:19.242653   18533 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:28:19.242665   18533 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 13:28:19.242673   18533 kubeadm.go:1114] stopping kube-system containers ...
	I0108 13:28:19.242755   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:28:19.265165   18533 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 13:28:19.275806   18533 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:28:19.283664   18533 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Jan  8 21:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Jan  8 21:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Jan  8 21:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jan  8 21:24 /etc/kubernetes/scheduler.conf
	
	I0108 13:28:19.283740   18533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 13:28:19.291367   18533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 13:28:19.299111   18533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 13:28:19.306701   18533 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 13:28:19.314273   18533 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:28:19.321878   18533 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 13:28:19.321893   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:28:19.379017   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:28:19.960990   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:28:20.175625   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:28:20.244724   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:28:20.302953   18533 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:28:20.303050   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:20.812429   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:21.313338   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:21.812175   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:22.312690   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:22.812497   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:23.312571   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:23.813402   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:24.313425   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:24.812159   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:25.312360   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:25.812386   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:26.312791   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:26.814320   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:27.312425   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:27.812651   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:28.312440   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:28.814420   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:29.312236   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:29.812688   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:30.312405   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:30.812346   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:31.312388   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:31.812626   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:32.312728   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:32.812574   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:33.313485   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:33.812615   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:34.313734   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:34.813733   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:35.312469   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:35.812723   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:36.314380   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:36.812399   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:37.314365   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:37.812799   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:38.312396   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:38.813404   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:39.312611   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:39.812393   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:40.312433   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:40.813584   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:41.314291   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:41.812225   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:42.312404   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:42.812265   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:43.312401   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:43.814410   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:44.312682   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:44.814474   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:45.312595   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:45.812466   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:46.312484   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:46.812501   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:47.312296   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:47.812778   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:48.312548   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:48.812862   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:49.314429   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:49.814400   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:50.312459   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:50.812930   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:51.312441   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:51.812840   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:52.313108   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:52.813266   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:53.312538   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:53.814517   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:54.314452   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:54.812685   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:55.313268   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:55.812746   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:56.313002   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:56.813362   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:57.312562   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:57.812478   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:58.313000   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:58.813433   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:59.312697   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:28:59.813142   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:00.314005   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:00.812613   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:01.313873   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:01.812433   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:02.312617   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:02.813774   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:03.312707   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:03.812502   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:04.313685   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:04.813182   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:05.312566   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:05.812480   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:06.312608   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:06.812810   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:07.312875   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:07.812801   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:08.312768   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:08.813834   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:09.313654   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:09.814515   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:10.313601   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:10.814534   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:11.312512   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:11.813420   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:12.313556   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:12.812561   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:13.313466   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:13.814157   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:14.313761   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:14.812698   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:15.314507   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:15.812605   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:16.312610   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:16.812519   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:17.312596   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:17.812901   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:18.313995   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:18.812697   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:19.314637   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:19.813300   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:20.312752   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:29:20.339659   18533 logs.go:274] 0 containers: []
	W0108 13:29:20.339677   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:29:20.339779   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:29:20.364001   18533 logs.go:274] 0 containers: []
	W0108 13:29:20.364015   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:29:20.364097   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:29:20.421878   18533 logs.go:274] 0 containers: []
	W0108 13:29:20.421892   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:29:20.421982   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:29:20.445823   18533 logs.go:274] 0 containers: []
	W0108 13:29:20.445837   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:29:20.445927   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:29:20.468326   18533 logs.go:274] 0 containers: []
	W0108 13:29:20.468339   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:29:20.468429   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:29:20.492096   18533 logs.go:274] 0 containers: []
	W0108 13:29:20.492110   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:29:20.492194   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:29:20.515012   18533 logs.go:274] 0 containers: []
	W0108 13:29:20.515026   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:29:20.515124   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:29:20.538403   18533 logs.go:274] 0 containers: []
	W0108 13:29:20.538417   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:29:20.538423   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:29:20.538431   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:29:20.550441   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:29:20.550460   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:29:20.608785   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:29:20.608800   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:29:20.608806   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:29:20.623171   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:29:20.623185   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:29:22.671585   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048377625s)
	I0108 13:29:22.671726   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:29:22.671734   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:29:25.209674   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:25.313238   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:29:25.338699   18533 logs.go:274] 0 containers: []
	W0108 13:29:25.338713   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:29:25.338799   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:29:25.361358   18533 logs.go:274] 0 containers: []
	W0108 13:29:25.361372   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:29:25.361461   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:29:25.384238   18533 logs.go:274] 0 containers: []
	W0108 13:29:25.384259   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:29:25.384365   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:29:25.407009   18533 logs.go:274] 0 containers: []
	W0108 13:29:25.407024   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:29:25.407105   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:29:25.430241   18533 logs.go:274] 0 containers: []
	W0108 13:29:25.430257   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:29:25.430345   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:29:25.453836   18533 logs.go:274] 0 containers: []
	W0108 13:29:25.453851   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:29:25.453936   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:29:25.477589   18533 logs.go:274] 0 containers: []
	W0108 13:29:25.477605   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:29:25.477702   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:29:25.501198   18533 logs.go:274] 0 containers: []
	W0108 13:29:25.501213   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:29:25.501222   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:29:25.501233   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:29:25.540409   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:29:25.540424   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:29:25.552578   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:29:25.552592   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:29:25.612896   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:29:25.612911   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:29:25.612918   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:29:25.628433   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:29:25.628449   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:29:27.678742   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050245012s)
	I0108 13:29:30.179614   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:30.312738   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:29:30.339917   18533 logs.go:274] 0 containers: []
	W0108 13:29:30.339931   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:29:30.340023   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:29:30.363033   18533 logs.go:274] 0 containers: []
	W0108 13:29:30.363047   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:29:30.363137   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:29:30.386631   18533 logs.go:274] 0 containers: []
	W0108 13:29:30.386645   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:29:30.386726   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:29:30.410032   18533 logs.go:274] 0 containers: []
	W0108 13:29:30.410046   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:29:30.410130   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:29:30.433527   18533 logs.go:274] 0 containers: []
	W0108 13:29:30.433542   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:29:30.433626   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:29:30.457909   18533 logs.go:274] 0 containers: []
	W0108 13:29:30.457938   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:29:30.458018   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:29:30.481343   18533 logs.go:274] 0 containers: []
	W0108 13:29:30.481358   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:29:30.481440   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:29:30.505495   18533 logs.go:274] 0 containers: []
	W0108 13:29:30.505512   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:29:30.505520   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:29:30.505527   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:29:30.546586   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:29:30.546602   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:29:30.559054   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:29:30.559069   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:29:30.614889   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:29:30.614901   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:29:30.614908   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:29:30.630156   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:29:30.630170   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:29:32.681801   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05161085s)
	I0108 13:29:35.182333   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:35.312668   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:29:35.339905   18533 logs.go:274] 0 containers: []
	W0108 13:29:35.339918   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:29:35.340002   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:29:35.363727   18533 logs.go:274] 0 containers: []
	W0108 13:29:35.363741   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:29:35.363835   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:29:35.389225   18533 logs.go:274] 0 containers: []
	W0108 13:29:35.389252   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:29:35.389352   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:29:35.434680   18533 logs.go:274] 0 containers: []
	W0108 13:29:35.434694   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:29:35.434786   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:29:35.458090   18533 logs.go:274] 0 containers: []
	W0108 13:29:35.458103   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:29:35.458191   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:29:35.483131   18533 logs.go:274] 0 containers: []
	W0108 13:29:35.483145   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:29:35.483226   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:29:35.506663   18533 logs.go:274] 0 containers: []
	W0108 13:29:35.506677   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:29:35.506763   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:29:35.529067   18533 logs.go:274] 0 containers: []
	W0108 13:29:35.529080   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:29:35.529087   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:29:35.529094   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:29:35.568826   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:29:35.568880   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:29:35.581461   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:29:35.581476   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:29:35.637590   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:29:35.637606   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:29:35.637612   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:29:35.652094   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:29:35.652107   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:29:37.701047   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048918993s)
	I0108 13:29:40.202041   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:40.313701   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:29:40.338820   18533 logs.go:274] 0 containers: []
	W0108 13:29:40.338834   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:29:40.338924   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:29:40.361723   18533 logs.go:274] 0 containers: []
	W0108 13:29:40.361737   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:29:40.361819   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:29:40.384348   18533 logs.go:274] 0 containers: []
	W0108 13:29:40.384362   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:29:40.384453   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:29:40.408075   18533 logs.go:274] 0 containers: []
	W0108 13:29:40.408092   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:29:40.408178   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:29:40.431441   18533 logs.go:274] 0 containers: []
	W0108 13:29:40.431456   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:29:40.431541   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:29:40.454116   18533 logs.go:274] 0 containers: []
	W0108 13:29:40.454131   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:29:40.454216   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:29:40.477812   18533 logs.go:274] 0 containers: []
	W0108 13:29:40.477826   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:29:40.477909   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:29:40.500293   18533 logs.go:274] 0 containers: []
	W0108 13:29:40.500315   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:29:40.500322   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:29:40.500329   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:29:42.552342   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051987904s)
	I0108 13:29:42.552458   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:29:42.552466   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:29:42.590063   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:29:42.590076   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:29:42.602648   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:29:42.602661   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:29:42.657113   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:29:42.657127   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:29:42.657133   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:29:45.171057   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:45.314665   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:29:45.341619   18533 logs.go:274] 0 containers: []
	W0108 13:29:45.341639   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:29:45.341724   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:29:45.364058   18533 logs.go:274] 0 containers: []
	W0108 13:29:45.364072   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:29:45.364157   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:29:45.387705   18533 logs.go:274] 0 containers: []
	W0108 13:29:45.387719   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:29:45.387804   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:29:45.410768   18533 logs.go:274] 0 containers: []
	W0108 13:29:45.410781   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:29:45.410863   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:29:45.433242   18533 logs.go:274] 0 containers: []
	W0108 13:29:45.433256   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:29:45.433343   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:29:45.456203   18533 logs.go:274] 0 containers: []
	W0108 13:29:45.456218   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:29:45.456306   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:29:45.481934   18533 logs.go:274] 0 containers: []
	W0108 13:29:45.481948   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:29:45.482034   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:29:45.504750   18533 logs.go:274] 0 containers: []
	W0108 13:29:45.504763   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:29:45.504770   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:29:45.504777   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:29:45.543429   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:29:45.543448   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:29:45.555754   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:29:45.555768   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:29:45.613327   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:29:45.613347   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:29:45.613357   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:29:45.627558   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:29:45.627570   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:29:47.676870   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049278989s)
	I0108 13:29:50.178497   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:50.314736   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:29:50.340932   18533 logs.go:274] 0 containers: []
	W0108 13:29:50.340944   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:29:50.341034   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:29:50.364496   18533 logs.go:274] 0 containers: []
	W0108 13:29:50.364514   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:29:50.364601   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:29:50.419990   18533 logs.go:274] 0 containers: []
	W0108 13:29:50.420007   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:29:50.420154   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:29:50.443887   18533 logs.go:274] 0 containers: []
	W0108 13:29:50.443903   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:29:50.443994   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:29:50.467742   18533 logs.go:274] 0 containers: []
	W0108 13:29:50.467759   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:29:50.467843   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:29:50.491425   18533 logs.go:274] 0 containers: []
	W0108 13:29:50.491438   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:29:50.491524   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:29:50.515229   18533 logs.go:274] 0 containers: []
	W0108 13:29:50.515243   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:29:50.515331   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:29:50.538475   18533 logs.go:274] 0 containers: []
	W0108 13:29:50.538489   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:29:50.538496   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:29:50.538503   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:29:50.577730   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:29:50.577749   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:29:50.590185   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:29:50.590198   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:29:50.646424   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:29:50.646437   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:29:50.646443   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:29:50.660921   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:29:50.660955   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:29:52.711295   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050314732s)
	I0108 13:29:55.212248   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:29:55.314744   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:29:55.340305   18533 logs.go:274] 0 containers: []
	W0108 13:29:55.340318   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:29:55.340416   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:29:55.363712   18533 logs.go:274] 0 containers: []
	W0108 13:29:55.363727   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:29:55.363809   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:29:55.386077   18533 logs.go:274] 0 containers: []
	W0108 13:29:55.386089   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:29:55.386176   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:29:55.408643   18533 logs.go:274] 0 containers: []
	W0108 13:29:55.408657   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:29:55.408745   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:29:55.431520   18533 logs.go:274] 0 containers: []
	W0108 13:29:55.431534   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:29:55.431620   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:29:55.455547   18533 logs.go:274] 0 containers: []
	W0108 13:29:55.455560   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:29:55.455643   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:29:55.480455   18533 logs.go:274] 0 containers: []
	W0108 13:29:55.480469   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:29:55.480558   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:29:55.504362   18533 logs.go:274] 0 containers: []
	W0108 13:29:55.504379   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:29:55.504388   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:29:55.504397   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:29:55.543762   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:29:55.543779   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:29:55.556497   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:29:55.556526   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:29:55.614139   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:29:55.614152   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:29:55.614158   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:29:55.630361   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:29:55.630376   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:29:57.678238   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047839245s)
	I0108 13:30:00.178846   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:00.313028   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:00.339785   18533 logs.go:274] 0 containers: []
	W0108 13:30:00.339799   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:00.339892   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:00.362787   18533 logs.go:274] 0 containers: []
	W0108 13:30:00.362801   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:00.362891   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:00.385906   18533 logs.go:274] 0 containers: []
	W0108 13:30:00.385920   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:00.386004   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:00.408437   18533 logs.go:274] 0 containers: []
	W0108 13:30:00.408452   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:00.408537   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:00.432195   18533 logs.go:274] 0 containers: []
	W0108 13:30:00.432209   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:00.432296   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:00.455375   18533 logs.go:274] 0 containers: []
	W0108 13:30:00.455391   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:00.455489   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:00.477709   18533 logs.go:274] 0 containers: []
	W0108 13:30:00.477724   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:00.477805   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:00.500585   18533 logs.go:274] 0 containers: []
	W0108 13:30:00.500603   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:00.500611   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:00.500617   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:00.541314   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:00.541334   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:00.554535   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:00.554551   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:00.610355   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:00.610373   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:00.610380   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:00.624961   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:00.624977   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:02.675392   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050391291s)
	I0108 13:30:05.175820   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:05.313033   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:05.338879   18533 logs.go:274] 0 containers: []
	W0108 13:30:05.338893   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:05.338979   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:05.362879   18533 logs.go:274] 0 containers: []
	W0108 13:30:05.362898   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:05.363010   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:05.389384   18533 logs.go:274] 0 containers: []
	W0108 13:30:05.389403   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:05.389502   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:05.439513   18533 logs.go:274] 0 containers: []
	W0108 13:30:05.439526   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:05.439608   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:05.462248   18533 logs.go:274] 0 containers: []
	W0108 13:30:05.462263   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:05.462347   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:05.487485   18533 logs.go:274] 0 containers: []
	W0108 13:30:05.487500   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:05.487581   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:05.510915   18533 logs.go:274] 0 containers: []
	W0108 13:30:05.510928   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:05.511009   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:05.533990   18533 logs.go:274] 0 containers: []
	W0108 13:30:05.534003   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:05.534011   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:05.534018   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:05.572928   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:05.572949   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:05.586529   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:05.586544   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:05.642612   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:05.642623   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:05.642630   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:05.657361   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:05.657375   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:07.707525   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050127034s)
	I0108 13:30:10.207892   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:10.314254   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:10.340099   18533 logs.go:274] 0 containers: []
	W0108 13:30:10.340113   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:10.340195   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:10.363879   18533 logs.go:274] 0 containers: []
	W0108 13:30:10.363894   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:10.363992   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:10.387681   18533 logs.go:274] 0 containers: []
	W0108 13:30:10.387694   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:10.387777   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:10.411113   18533 logs.go:274] 0 containers: []
	W0108 13:30:10.411127   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:10.411228   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:10.433946   18533 logs.go:274] 0 containers: []
	W0108 13:30:10.433960   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:10.434036   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:10.457085   18533 logs.go:274] 0 containers: []
	W0108 13:30:10.457101   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:10.457186   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:10.482341   18533 logs.go:274] 0 containers: []
	W0108 13:30:10.482356   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:10.482449   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:10.507340   18533 logs.go:274] 0 containers: []
	W0108 13:30:10.507355   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:10.507363   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:10.507371   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:10.521687   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:10.521701   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:12.569886   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048162574s)
	I0108 13:30:12.570004   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:12.570012   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:12.607598   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:12.607616   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:12.619944   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:12.619958   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:12.674433   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:15.174666   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:15.313557   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:15.339686   18533 logs.go:274] 0 containers: []
	W0108 13:30:15.339700   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:15.339783   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:15.362588   18533 logs.go:274] 0 containers: []
	W0108 13:30:15.362599   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:15.362680   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:15.385505   18533 logs.go:274] 0 containers: []
	W0108 13:30:15.385525   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:15.385624   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:15.408817   18533 logs.go:274] 0 containers: []
	W0108 13:30:15.408831   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:15.408914   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:15.431900   18533 logs.go:274] 0 containers: []
	W0108 13:30:15.431914   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:15.432000   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:15.456193   18533 logs.go:274] 0 containers: []
	W0108 13:30:15.456206   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:15.456290   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:15.480593   18533 logs.go:274] 0 containers: []
	W0108 13:30:15.480606   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:15.480687   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:15.504177   18533 logs.go:274] 0 containers: []
	W0108 13:30:15.504191   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:15.504198   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:15.504205   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:15.541204   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:15.541219   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:15.553555   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:15.553584   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:15.612025   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:15.612037   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:15.612043   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:15.626579   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:15.626593   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:17.681400   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054786344s)
	I0108 13:30:20.181702   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:20.313082   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:20.338650   18533 logs.go:274] 0 containers: []
	W0108 13:30:20.338664   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:20.338751   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:20.363376   18533 logs.go:274] 0 containers: []
	W0108 13:30:20.363390   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:20.363477   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:20.418140   18533 logs.go:274] 0 containers: []
	W0108 13:30:20.418161   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:20.418293   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:20.441970   18533 logs.go:274] 0 containers: []
	W0108 13:30:20.441985   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:20.442068   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:20.466272   18533 logs.go:274] 0 containers: []
	W0108 13:30:20.466286   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:20.466372   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:20.490228   18533 logs.go:274] 0 containers: []
	W0108 13:30:20.490241   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:20.490323   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:20.513351   18533 logs.go:274] 0 containers: []
	W0108 13:30:20.513364   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:20.513446   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:20.537196   18533 logs.go:274] 0 containers: []
	W0108 13:30:20.537211   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:20.537218   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:20.537225   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:20.576666   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:20.576681   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:20.588935   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:20.588950   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:20.645246   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:20.645257   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:20.645264   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:20.659756   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:20.659769   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:22.711405   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051614822s)
	I0108 13:30:25.212077   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:25.314943   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:25.341198   18533 logs.go:274] 0 containers: []
	W0108 13:30:25.341215   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:25.341297   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:25.364835   18533 logs.go:274] 0 containers: []
	W0108 13:30:25.364849   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:25.364931   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:25.388414   18533 logs.go:274] 0 containers: []
	W0108 13:30:25.388428   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:25.388523   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:25.411939   18533 logs.go:274] 0 containers: []
	W0108 13:30:25.411953   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:25.412039   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:25.435116   18533 logs.go:274] 0 containers: []
	W0108 13:30:25.435131   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:25.435216   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:25.458287   18533 logs.go:274] 0 containers: []
	W0108 13:30:25.458301   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:25.458383   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:25.484470   18533 logs.go:274] 0 containers: []
	W0108 13:30:25.484487   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:25.484566   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:25.510511   18533 logs.go:274] 0 containers: []
	W0108 13:30:25.510524   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:25.510531   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:25.510537   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:25.525398   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:25.525413   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:27.576723   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051289318s)
	I0108 13:30:27.576835   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:27.576842   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:27.615401   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:27.615416   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:27.630654   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:27.630670   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:27.687473   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:30.187789   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:30.314899   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:30.340261   18533 logs.go:274] 0 containers: []
	W0108 13:30:30.340276   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:30.340359   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:30.364078   18533 logs.go:274] 0 containers: []
	W0108 13:30:30.364092   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:30.364174   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:30.386712   18533 logs.go:274] 0 containers: []
	W0108 13:30:30.386725   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:30.386807   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:30.410192   18533 logs.go:274] 0 containers: []
	W0108 13:30:30.410205   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:30.410286   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:30.433403   18533 logs.go:274] 0 containers: []
	W0108 13:30:30.433417   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:30.433503   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:30.456151   18533 logs.go:274] 0 containers: []
	W0108 13:30:30.456166   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:30.456249   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:30.479569   18533 logs.go:274] 0 containers: []
	W0108 13:30:30.479584   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:30.479667   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:30.502874   18533 logs.go:274] 0 containers: []
	W0108 13:30:30.502888   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:30.502896   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:30.502904   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:30.542262   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:30.542279   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:30.554668   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:30.554682   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:30.612056   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:30.612068   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:30.612074   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:30.626633   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:30.626646   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:32.674141   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047473423s)
	I0108 13:30:35.176547   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:35.312890   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:35.338824   18533 logs.go:274] 0 containers: []
	W0108 13:30:35.338859   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:35.338963   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:35.362893   18533 logs.go:274] 0 containers: []
	W0108 13:30:35.362912   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:35.363005   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:35.388994   18533 logs.go:274] 0 containers: []
	W0108 13:30:35.389008   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:35.389092   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:35.428944   18533 logs.go:274] 0 containers: []
	W0108 13:30:35.428958   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:35.429038   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:35.450964   18533 logs.go:274] 0 containers: []
	W0108 13:30:35.450977   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:35.451061   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:35.474754   18533 logs.go:274] 0 containers: []
	W0108 13:30:35.474771   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:35.474863   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:35.497369   18533 logs.go:274] 0 containers: []
	W0108 13:30:35.497383   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:35.497468   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:35.520867   18533 logs.go:274] 0 containers: []
	W0108 13:30:35.520880   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:35.520887   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:35.520895   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:35.559797   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:35.559810   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:35.573141   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:35.573155   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:35.635156   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:35.635168   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:35.635175   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:35.650204   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:35.650217   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:37.701213   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050974992s)
	I0108 13:30:40.202196   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:40.314136   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:40.340676   18533 logs.go:274] 0 containers: []
	W0108 13:30:40.340691   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:40.340776   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:40.364582   18533 logs.go:274] 0 containers: []
	W0108 13:30:40.364596   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:40.364679   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:40.386870   18533 logs.go:274] 0 containers: []
	W0108 13:30:40.386886   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:40.386971   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:40.408608   18533 logs.go:274] 0 containers: []
	W0108 13:30:40.408627   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:40.408714   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:40.430702   18533 logs.go:274] 0 containers: []
	W0108 13:30:40.430715   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:40.430797   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:40.453836   18533 logs.go:274] 0 containers: []
	W0108 13:30:40.453853   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:40.453940   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:40.477404   18533 logs.go:274] 0 containers: []
	W0108 13:30:40.477418   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:40.477503   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:40.501355   18533 logs.go:274] 0 containers: []
	W0108 13:30:40.501368   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:40.501377   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:40.501383   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:40.515236   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:40.515249   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:42.565728   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050457619s)
	I0108 13:30:42.565838   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:42.565847   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:42.603798   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:42.603811   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:42.617034   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:42.617050   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:42.672219   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:45.172677   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:45.313975   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:45.340818   18533 logs.go:274] 0 containers: []
	W0108 13:30:45.340832   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:45.340915   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:45.364413   18533 logs.go:274] 0 containers: []
	W0108 13:30:45.364427   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:45.364509   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:45.388110   18533 logs.go:274] 0 containers: []
	W0108 13:30:45.388124   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:45.388205   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:45.410822   18533 logs.go:274] 0 containers: []
	W0108 13:30:45.410837   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:45.410920   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:45.433687   18533 logs.go:274] 0 containers: []
	W0108 13:30:45.433700   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:45.433785   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:45.458574   18533 logs.go:274] 0 containers: []
	W0108 13:30:45.458587   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:45.458680   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:45.483903   18533 logs.go:274] 0 containers: []
	W0108 13:30:45.483917   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:45.484005   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:45.507033   18533 logs.go:274] 0 containers: []
	W0108 13:30:45.507047   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:45.507055   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:45.507062   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:47.559126   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052042746s)
	I0108 13:30:47.559237   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:47.559244   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:47.597813   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:47.597828   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:47.611163   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:47.611179   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:47.667491   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:47.667508   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:47.667517   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:50.182122   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:50.314946   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:50.343803   18533 logs.go:274] 0 containers: []
	W0108 13:30:50.343817   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:50.343899   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:50.366543   18533 logs.go:274] 0 containers: []
	W0108 13:30:50.366564   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:50.366658   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:50.418987   18533 logs.go:274] 0 containers: []
	W0108 13:30:50.419001   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:50.419087   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:50.444250   18533 logs.go:274] 0 containers: []
	W0108 13:30:50.444263   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:50.444352   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:50.468703   18533 logs.go:274] 0 containers: []
	W0108 13:30:50.468716   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:50.468780   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:50.492074   18533 logs.go:274] 0 containers: []
	W0108 13:30:50.492088   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:50.492172   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:50.516318   18533 logs.go:274] 0 containers: []
	W0108 13:30:50.516331   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:50.516414   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:50.539889   18533 logs.go:274] 0 containers: []
	W0108 13:30:50.539902   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:50.539909   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:50.539916   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:50.579196   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:50.579212   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:50.591462   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:50.591477   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:50.647858   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:30:50.647878   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:50.647885   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:50.662015   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:50.662029   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:52.712917   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050868276s)
	I0108 13:30:55.213355   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:30:55.313041   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:30:55.337613   18533 logs.go:274] 0 containers: []
	W0108 13:30:55.337629   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:30:55.337709   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:30:55.360466   18533 logs.go:274] 0 containers: []
	W0108 13:30:55.360479   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:30:55.360561   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:30:55.384103   18533 logs.go:274] 0 containers: []
	W0108 13:30:55.384116   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:30:55.384201   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:30:55.406930   18533 logs.go:274] 0 containers: []
	W0108 13:30:55.406944   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:30:55.407026   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:30:55.429627   18533 logs.go:274] 0 containers: []
	W0108 13:30:55.429643   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:30:55.429734   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:30:55.453244   18533 logs.go:274] 0 containers: []
	W0108 13:30:55.453258   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:30:55.453337   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:30:55.477062   18533 logs.go:274] 0 containers: []
	W0108 13:30:55.477077   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:30:55.477180   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:30:55.500513   18533 logs.go:274] 0 containers: []
	W0108 13:30:55.500527   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:30:55.500533   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:30:55.500541   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:30:55.514954   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:30:55.514967   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:30:57.565252   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050266115s)
	I0108 13:30:57.565364   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:30:57.565371   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:30:57.603911   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:30:57.603924   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:30:57.617029   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:30:57.617042   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:30:57.672338   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:00.172961   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:00.313449   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:00.339142   18533 logs.go:274] 0 containers: []
	W0108 13:31:00.339156   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:00.339241   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:00.362911   18533 logs.go:274] 0 containers: []
	W0108 13:31:00.362925   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:00.363007   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:00.386170   18533 logs.go:274] 0 containers: []
	W0108 13:31:00.386184   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:00.386264   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:00.409898   18533 logs.go:274] 0 containers: []
	W0108 13:31:00.409913   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:00.410001   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:00.434124   18533 logs.go:274] 0 containers: []
	W0108 13:31:00.434137   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:00.434217   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:00.456382   18533 logs.go:274] 0 containers: []
	W0108 13:31:00.456397   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:00.456482   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:00.479866   18533 logs.go:274] 0 containers: []
	W0108 13:31:00.479881   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:00.479961   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:00.503855   18533 logs.go:274] 0 containers: []
	W0108 13:31:00.503870   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:00.503877   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:00.503884   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:00.544091   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:00.544105   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:00.556739   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:00.556759   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:00.616106   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:00.616120   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:00.616127   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:00.630289   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:00.630301   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:02.680549   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050225608s)
	I0108 13:31:05.182780   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:05.313274   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:05.339818   18533 logs.go:274] 0 containers: []
	W0108 13:31:05.339831   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:05.339924   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:05.362585   18533 logs.go:274] 0 containers: []
	W0108 13:31:05.362599   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:05.362699   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:05.388656   18533 logs.go:274] 0 containers: []
	W0108 13:31:05.388675   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:05.388870   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:05.434661   18533 logs.go:274] 0 containers: []
	W0108 13:31:05.434680   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:05.434767   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:05.456802   18533 logs.go:274] 0 containers: []
	W0108 13:31:05.456815   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:05.456900   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:05.480124   18533 logs.go:274] 0 containers: []
	W0108 13:31:05.480139   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:05.480223   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:05.502536   18533 logs.go:274] 0 containers: []
	W0108 13:31:05.502549   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:05.502638   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:05.525733   18533 logs.go:274] 0 containers: []
	W0108 13:31:05.525747   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:05.525754   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:05.525762   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:05.565155   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:05.565177   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:05.578151   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:05.578165   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:05.635251   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:05.635262   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:05.635269   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:05.649797   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:05.649811   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:07.700526   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050693654s)
	I0108 13:31:10.200991   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:10.313712   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:10.338435   18533 logs.go:274] 0 containers: []
	W0108 13:31:10.338449   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:10.338539   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:10.361138   18533 logs.go:274] 0 containers: []
	W0108 13:31:10.361153   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:10.361234   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:10.384541   18533 logs.go:274] 0 containers: []
	W0108 13:31:10.384559   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:10.384641   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:10.406994   18533 logs.go:274] 0 containers: []
	W0108 13:31:10.407009   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:10.407093   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:10.430987   18533 logs.go:274] 0 containers: []
	W0108 13:31:10.431000   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:10.431082   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:10.453836   18533 logs.go:274] 0 containers: []
	W0108 13:31:10.453849   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:10.453932   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:10.477221   18533 logs.go:274] 0 containers: []
	W0108 13:31:10.477234   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:10.477320   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:10.501667   18533 logs.go:274] 0 containers: []
	W0108 13:31:10.501682   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:10.501690   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:10.501698   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:10.558788   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:10.558801   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:10.558808   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:10.574884   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:10.574898   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:12.622460   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047540579s)
	I0108 13:31:12.622577   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:12.622586   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:12.659981   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:12.659994   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:15.174960   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:15.313919   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:15.338368   18533 logs.go:274] 0 containers: []
	W0108 13:31:15.338382   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:15.338464   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:15.361256   18533 logs.go:274] 0 containers: []
	W0108 13:31:15.361270   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:15.361356   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:15.385621   18533 logs.go:274] 0 containers: []
	W0108 13:31:15.385634   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:15.385737   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:15.408481   18533 logs.go:274] 0 containers: []
	W0108 13:31:15.408496   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:15.408578   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:15.430640   18533 logs.go:274] 0 containers: []
	W0108 13:31:15.430652   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:15.430733   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:15.453544   18533 logs.go:274] 0 containers: []
	W0108 13:31:15.453558   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:15.453640   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:15.478244   18533 logs.go:274] 0 containers: []
	W0108 13:31:15.478257   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:15.478338   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:15.502120   18533 logs.go:274] 0 containers: []
	W0108 13:31:15.502133   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:15.502140   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:15.502147   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:15.516471   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:15.516485   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:17.564974   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048469299s)
	I0108 13:31:17.565087   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:17.565095   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:17.603653   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:17.603666   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:17.615910   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:17.615925   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:17.671979   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:20.172793   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:20.313494   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:20.339313   18533 logs.go:274] 0 containers: []
	W0108 13:31:20.339331   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:20.339430   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:20.363859   18533 logs.go:274] 0 containers: []
	W0108 13:31:20.363872   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:20.363960   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:20.418240   18533 logs.go:274] 0 containers: []
	W0108 13:31:20.418260   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:20.418393   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:20.442753   18533 logs.go:274] 0 containers: []
	W0108 13:31:20.442767   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:20.442850   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:20.465964   18533 logs.go:274] 0 containers: []
	W0108 13:31:20.465978   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:20.466061   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:20.489133   18533 logs.go:274] 0 containers: []
	W0108 13:31:20.489146   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:20.489233   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:20.513571   18533 logs.go:274] 0 containers: []
	W0108 13:31:20.513586   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:20.513668   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:20.536991   18533 logs.go:274] 0 containers: []
	W0108 13:31:20.537006   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:20.537014   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:20.537021   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:22.586983   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049939243s)
	I0108 13:31:22.587098   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:22.587106   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:22.625467   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:22.625485   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:22.638231   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:22.638254   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:22.693884   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:22.693898   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:22.693905   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:25.209947   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:25.313427   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:25.339683   18533 logs.go:274] 0 containers: []
	W0108 13:31:25.339703   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:25.339796   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:25.363154   18533 logs.go:274] 0 containers: []
	W0108 13:31:25.363168   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:25.363250   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:25.386828   18533 logs.go:274] 0 containers: []
	W0108 13:31:25.386841   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:25.386923   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:25.409964   18533 logs.go:274] 0 containers: []
	W0108 13:31:25.409977   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:25.410057   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:25.433791   18533 logs.go:274] 0 containers: []
	W0108 13:31:25.433805   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:25.433886   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:25.456365   18533 logs.go:274] 0 containers: []
	W0108 13:31:25.456379   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:25.456462   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:25.481631   18533 logs.go:274] 0 containers: []
	W0108 13:31:25.481645   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:25.481741   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:25.505364   18533 logs.go:274] 0 containers: []
	W0108 13:31:25.505378   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:25.505385   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:25.505407   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:25.520161   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:25.520174   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:27.570813   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050617465s)
	I0108 13:31:27.570917   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:27.570924   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:27.607884   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:27.607898   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:27.620163   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:27.620179   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:27.674705   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:30.175332   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:30.313648   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:30.338947   18533 logs.go:274] 0 containers: []
	W0108 13:31:30.338962   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:30.339046   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:30.362652   18533 logs.go:274] 0 containers: []
	W0108 13:31:30.362665   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:30.362748   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:30.385370   18533 logs.go:274] 0 containers: []
	W0108 13:31:30.385384   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:30.385466   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:30.408002   18533 logs.go:274] 0 containers: []
	W0108 13:31:30.408016   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:30.408104   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:30.430364   18533 logs.go:274] 0 containers: []
	W0108 13:31:30.430380   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:30.430461   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:30.453185   18533 logs.go:274] 0 containers: []
	W0108 13:31:30.453198   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:30.453278   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:30.476327   18533 logs.go:274] 0 containers: []
	W0108 13:31:30.476340   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:30.476420   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:30.500069   18533 logs.go:274] 0 containers: []
	W0108 13:31:30.500082   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:30.500089   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:30.500096   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:30.514766   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:30.514780   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:32.564726   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049923154s)
	I0108 13:31:32.564850   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:32.564858   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:32.602573   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:32.602587   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:32.615162   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:32.615176   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:32.669679   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:35.170181   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:35.313506   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:35.338666   18533 logs.go:274] 0 containers: []
	W0108 13:31:35.338683   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:35.338776   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:35.362598   18533 logs.go:274] 0 containers: []
	W0108 13:31:35.362612   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:35.362699   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:35.421523   18533 logs.go:274] 0 containers: []
	W0108 13:31:35.421537   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:35.421623   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:35.445446   18533 logs.go:274] 0 containers: []
	W0108 13:31:35.445460   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:35.445543   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:35.468739   18533 logs.go:274] 0 containers: []
	W0108 13:31:35.468754   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:35.468839   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:35.493003   18533 logs.go:274] 0 containers: []
	W0108 13:31:35.493016   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:35.493099   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:35.516947   18533 logs.go:274] 0 containers: []
	W0108 13:31:35.516961   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:35.517044   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:35.539828   18533 logs.go:274] 0 containers: []
	W0108 13:31:35.539842   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:35.539849   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:35.539855   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:37.590935   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051058455s)
	I0108 13:31:37.591055   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:37.591064   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:37.629470   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:37.629491   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:37.642176   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:37.642193   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:37.697992   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:37.698004   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:37.698010   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:40.212353   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:40.314099   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:40.339997   18533 logs.go:274] 0 containers: []
	W0108 13:31:40.340012   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:40.340091   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:40.362961   18533 logs.go:274] 0 containers: []
	W0108 13:31:40.362974   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:40.363061   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:40.385257   18533 logs.go:274] 0 containers: []
	W0108 13:31:40.385272   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:40.385356   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:40.407780   18533 logs.go:274] 0 containers: []
	W0108 13:31:40.407796   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:40.407876   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:40.430728   18533 logs.go:274] 0 containers: []
	W0108 13:31:40.430742   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:40.430838   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:40.454132   18533 logs.go:274] 0 containers: []
	W0108 13:31:40.454147   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:40.454230   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:40.477308   18533 logs.go:274] 0 containers: []
	W0108 13:31:40.477322   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:40.477410   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:40.500494   18533 logs.go:274] 0 containers: []
	W0108 13:31:40.500508   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:40.500516   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:40.500522   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:40.539434   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:40.539448   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:40.551665   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:40.551679   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:40.611437   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:40.611450   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:40.611457   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:40.626472   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:40.626487   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:42.677707   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051199237s)
	I0108 13:31:45.178321   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:45.313423   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:45.339144   18533 logs.go:274] 0 containers: []
	W0108 13:31:45.339159   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:45.339242   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:45.363022   18533 logs.go:274] 0 containers: []
	W0108 13:31:45.363036   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:45.363121   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:45.385848   18533 logs.go:274] 0 containers: []
	W0108 13:31:45.385864   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:45.385948   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:45.409079   18533 logs.go:274] 0 containers: []
	W0108 13:31:45.409093   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:45.409178   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:45.432225   18533 logs.go:274] 0 containers: []
	W0108 13:31:45.432239   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:45.432318   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:45.455400   18533 logs.go:274] 0 containers: []
	W0108 13:31:45.455414   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:45.455554   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:45.479023   18533 logs.go:274] 0 containers: []
	W0108 13:31:45.479038   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:45.479126   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:45.503254   18533 logs.go:274] 0 containers: []
	W0108 13:31:45.503267   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:45.503275   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:45.503284   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:47.553643   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050334712s)
	I0108 13:31:47.553756   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:47.553794   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:47.590904   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:47.590921   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:47.604022   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:47.604037   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:47.659114   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:47.659127   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:47.659133   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:50.173841   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:50.313169   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:50.337830   18533 logs.go:274] 0 containers: []
	W0108 13:31:50.337849   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:50.337935   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:50.361681   18533 logs.go:274] 0 containers: []
	W0108 13:31:50.361697   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:50.361783   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:50.384734   18533 logs.go:274] 0 containers: []
	W0108 13:31:50.384748   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:50.384837   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:50.438921   18533 logs.go:274] 0 containers: []
	W0108 13:31:50.438935   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:50.439018   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:50.463574   18533 logs.go:274] 0 containers: []
	W0108 13:31:50.463587   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:50.463675   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:50.486051   18533 logs.go:274] 0 containers: []
	W0108 13:31:50.486066   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:50.486151   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:50.509457   18533 logs.go:274] 0 containers: []
	W0108 13:31:50.509470   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:50.509551   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:50.532314   18533 logs.go:274] 0 containers: []
	W0108 13:31:50.532328   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:50.532335   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:50.532342   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:50.572891   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:50.572906   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:50.585017   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:50.585033   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:50.640373   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:50.640385   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:50.640391   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:50.655084   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:50.655099   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:52.704936   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049814426s)
	I0108 13:31:55.205382   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:31:55.313225   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:31:55.338178   18533 logs.go:274] 0 containers: []
	W0108 13:31:55.338192   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:31:55.338281   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:31:55.360654   18533 logs.go:274] 0 containers: []
	W0108 13:31:55.360667   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:31:55.360750   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:31:55.383700   18533 logs.go:274] 0 containers: []
	W0108 13:31:55.383713   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:31:55.383798   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:31:55.406328   18533 logs.go:274] 0 containers: []
	W0108 13:31:55.406342   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:31:55.406423   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:31:55.428453   18533 logs.go:274] 0 containers: []
	W0108 13:31:55.428466   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:31:55.428550   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:31:55.451130   18533 logs.go:274] 0 containers: []
	W0108 13:31:55.451144   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:31:55.451230   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:31:55.475086   18533 logs.go:274] 0 containers: []
	W0108 13:31:55.475100   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:31:55.475196   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:31:55.498419   18533 logs.go:274] 0 containers: []
	W0108 13:31:55.498432   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:31:55.498439   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:31:55.498446   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:31:55.537028   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:31:55.537042   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:31:55.550844   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:31:55.550862   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:31:55.609673   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:31:55.609684   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:31:55.609692   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:31:55.625103   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:31:55.625118   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:31:57.675661   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05051875s)
	I0108 13:32:00.176223   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:32:00.313459   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:32:00.337563   18533 logs.go:274] 0 containers: []
	W0108 13:32:00.337576   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:32:00.337658   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:32:00.361294   18533 logs.go:274] 0 containers: []
	W0108 13:32:00.361308   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:32:00.361392   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:32:00.384461   18533 logs.go:274] 0 containers: []
	W0108 13:32:00.384477   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:32:00.384561   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:32:00.408597   18533 logs.go:274] 0 containers: []
	W0108 13:32:00.408611   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:32:00.408697   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:32:00.430957   18533 logs.go:274] 0 containers: []
	W0108 13:32:00.430971   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:32:00.431053   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:32:00.454400   18533 logs.go:274] 0 containers: []
	W0108 13:32:00.454414   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:32:00.454498   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:32:00.477126   18533 logs.go:274] 0 containers: []
	W0108 13:32:00.477141   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:32:00.477231   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:32:00.500525   18533 logs.go:274] 0 containers: []
	W0108 13:32:00.500538   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:32:00.500547   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:32:00.500554   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:32:00.540413   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:32:00.540427   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:32:00.553050   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:32:00.553084   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:32:00.609187   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:32:00.609199   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:32:00.609206   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:32:00.623048   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:32:00.623062   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:32:02.672101   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049016507s)
	I0108 13:32:05.173476   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:32:05.313273   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:32:05.338665   18533 logs.go:274] 0 containers: []
	W0108 13:32:05.338681   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:32:05.338770   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:32:05.363257   18533 logs.go:274] 0 containers: []
	W0108 13:32:05.363271   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:32:05.363355   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:32:05.419033   18533 logs.go:274] 0 containers: []
	W0108 13:32:05.419053   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:32:05.419192   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:32:05.444309   18533 logs.go:274] 0 containers: []
	W0108 13:32:05.444322   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:32:05.444404   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:32:05.467141   18533 logs.go:274] 0 containers: []
	W0108 13:32:05.467156   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:32:05.467243   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:32:05.489739   18533 logs.go:274] 0 containers: []
	W0108 13:32:05.489752   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:32:05.489834   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:32:05.513017   18533 logs.go:274] 0 containers: []
	W0108 13:32:05.513031   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:32:05.513117   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:32:05.536487   18533 logs.go:274] 0 containers: []
	W0108 13:32:05.536500   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:32:05.536507   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:32:05.536514   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:32:07.587981   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051444502s)
	I0108 13:32:07.588088   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:32:07.588095   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:32:07.627661   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:32:07.627680   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:32:07.640741   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:32:07.640759   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:32:07.697435   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:32:07.697449   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:32:07.697456   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:32:10.212415   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:32:10.315350   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:32:10.339594   18533 logs.go:274] 0 containers: []
	W0108 13:32:10.339612   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:32:10.339708   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:32:10.363376   18533 logs.go:274] 0 containers: []
	W0108 13:32:10.363389   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:32:10.363471   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:32:10.387497   18533 logs.go:274] 0 containers: []
	W0108 13:32:10.387509   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:32:10.387590   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:32:10.411149   18533 logs.go:274] 0 containers: []
	W0108 13:32:10.411164   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:32:10.411261   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:32:10.434394   18533 logs.go:274] 0 containers: []
	W0108 13:32:10.434408   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:32:10.434491   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:32:10.459944   18533 logs.go:274] 0 containers: []
	W0108 13:32:10.459958   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:32:10.460039   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:32:10.486536   18533 logs.go:274] 0 containers: []
	W0108 13:32:10.486550   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:32:10.486649   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:32:10.511146   18533 logs.go:274] 0 containers: []
	W0108 13:32:10.511160   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:32:10.511167   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:32:10.511174   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:32:12.561382   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050183583s)
	I0108 13:32:12.561495   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:32:12.561502   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:32:12.599438   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:32:12.599453   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:32:12.611653   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:32:12.611669   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:32:12.666892   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:32:12.666908   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:32:12.666920   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:32:15.181325   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:32:15.313827   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:32:15.339896   18533 logs.go:274] 0 containers: []
	W0108 13:32:15.339910   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:32:15.340001   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:32:15.363480   18533 logs.go:274] 0 containers: []
	W0108 13:32:15.363496   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:32:15.363578   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:32:15.387082   18533 logs.go:274] 0 containers: []
	W0108 13:32:15.387094   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:32:15.387179   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:32:15.410614   18533 logs.go:274] 0 containers: []
	W0108 13:32:15.410629   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:32:15.410714   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:32:15.434423   18533 logs.go:274] 0 containers: []
	W0108 13:32:15.434437   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:32:15.434520   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:32:15.457941   18533 logs.go:274] 0 containers: []
	W0108 13:32:15.457954   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:32:15.458037   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:32:15.483142   18533 logs.go:274] 0 containers: []
	W0108 13:32:15.483156   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:32:15.483242   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:32:15.505669   18533 logs.go:274] 0 containers: []
	W0108 13:32:15.505682   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:32:15.505689   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:32:15.505696   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:32:15.561482   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:32:15.561496   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:32:15.561503   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:32:15.576414   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:32:15.576429   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:32:17.624724   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048273366s)
	I0108 13:32:17.624837   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:32:17.624845   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:32:17.661756   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:32:17.661769   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:32:20.174658   18533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:32:20.314534   18533 kubeadm.go:631] restartCluster took 4m4.181342633s
	W0108 13:32:20.314687   18533 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
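The roughly four minutes of identical probes above are what minikube's restart path looks like from the outside: it repeatedly checks for a running kube-apiserver process (plus the usual control-plane containers) and re-gathers logs until a deadline passes, then gives up and falls back to a reset. As a rough, hypothetical illustration only, and not minikube's actual implementation (it shells out locally rather than over SSH, and checks only pgrep), a poll-with-deadline loop of that shape might be sketched like this:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a kube-apiserver process until the deadline
    // expires. Illustrative sketch inferred from the log above, not minikube code.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Same probe the log shows: pgrep for the newest matching process.
            cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if err := cmd.Run(); err == nil {
                return nil // a matching process exists
            }
            time.Sleep(5 * time.Second)
        }
        return errors.New("apiserver process never appeared")
    }

    func main() {
        if err := waitForAPIServer(4 * time.Minute); err != nil {
            fmt.Println("unable to restart cluster, will reset it:", err)
        }
    }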
	I0108 13:32:20.314714   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 13:32:20.733072   18533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:32:20.742937   18533 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:32:20.750737   18533 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 13:32:20.750799   18533 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:32:20.758428   18533 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
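The "config check failed, skipping stale config cleanup" message above is a simple presence test: the four expected kubeconfig files are listed with `ls -la`, and because they are missing the command exits with status 2, so there is nothing stale to clean up and the flow proceeds straight to `kubeadm init`. A minimal, hypothetical sketch of that check as inferred from the log (the file list and the skip-on-error behavior are assumptions, not the actual source):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Expected kubeconfig files, taken from the ls invocation in the log above.
    var kubeconfigs = []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }

    func main() {
        args := append([]string{"ls", "-la"}, kubeconfigs...)
        // If any file is missing, ls exits non-zero and the cleanup is skipped.
        if err := exec.Command("sudo", args...).Run(); err != nil {
            fmt.Println("config check failed, skipping stale config cleanup:", err)
            return
        }
        fmt.Println("existing kubeconfigs found; stale config cleanup would run here")
    }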
	I0108 13:32:20.758458   18533 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 13:32:20.806170   18533 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 13:32:20.806220   18533 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 13:32:21.105029   18533 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 13:32:21.105166   18533 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 13:32:21.105290   18533 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 13:32:21.331168   18533 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 13:32:21.332019   18533 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 13:32:21.338658   18533 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 13:32:21.408700   18533 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 13:32:21.430465   18533 out.go:204]   - Generating certificates and keys ...
	I0108 13:32:21.430568   18533 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 13:32:21.430661   18533 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 13:32:21.430741   18533 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 13:32:21.430836   18533 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 13:32:21.430931   18533 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 13:32:21.431008   18533 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 13:32:21.431092   18533 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 13:32:21.431134   18533 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 13:32:21.431198   18533 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 13:32:21.431296   18533 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 13:32:21.431332   18533 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 13:32:21.431393   18533 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 13:32:21.586856   18533 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 13:32:21.840052   18533 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 13:32:22.023182   18533 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 13:32:22.097611   18533 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 13:32:22.098704   18533 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 13:32:22.121433   18533 out.go:204]   - Booting up control plane ...
	I0108 13:32:22.121607   18533 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 13:32:22.121773   18533 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 13:32:22.121921   18533 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 13:32:22.122057   18533 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 13:32:22.122278   18533 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 13:33:02.108329   18533 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 13:33:02.109390   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:33:02.109623   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:33:07.111054   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:33:07.111313   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:33:17.111744   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:33:17.111917   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:33:37.117391   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:33:37.117527   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:34:17.143147   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:34:17.143318   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:34:17.143342   18533 kubeadm.go:317] 
	I0108 13:34:17.143399   18533 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 13:34:17.143436   18533 kubeadm.go:317] 	timed out waiting for the condition
	I0108 13:34:17.143446   18533 kubeadm.go:317] 
	I0108 13:34:17.143479   18533 kubeadm.go:317] This error is likely caused by:
	I0108 13:34:17.143526   18533 kubeadm.go:317] 	- The kubelet is not running
	I0108 13:34:17.143630   18533 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 13:34:17.143641   18533 kubeadm.go:317] 
	I0108 13:34:17.143722   18533 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 13:34:17.143752   18533 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 13:34:17.143783   18533 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 13:34:17.143790   18533 kubeadm.go:317] 
	I0108 13:34:17.143881   18533 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 13:34:17.143964   18533 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 13:34:17.144027   18533 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 13:34:17.144085   18533 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 13:34:17.144145   18533 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 13:34:17.144176   18533 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0108 13:34:17.147644   18533 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 13:34:17.147761   18533 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0108 13:34:17.147877   18533 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 13:34:17.147944   18533 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 13:34:17.147992   18533 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0108 13:34:17.148154   18533 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0108 13:34:17.148189   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0108 13:34:17.600906   18533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:34:17.614132   18533 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0108 13:34:17.614205   18533 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:34:17.625564   18533 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 13:34:17.625591   18533 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 13:34:17.689522   18533 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0108 13:34:17.689599   18533 kubeadm.go:317] [preflight] Running pre-flight checks
	I0108 13:34:18.070813   18533 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 13:34:18.070918   18533 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 13:34:18.071002   18533 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 13:34:18.358907   18533 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 13:34:18.361799   18533 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 13:34:18.369387   18533 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0108 13:34:18.453117   18533 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 13:34:18.494691   18533 out.go:204]   - Generating certificates and keys ...
	I0108 13:34:18.494837   18533 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0108 13:34:18.494968   18533 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0108 13:34:18.495091   18533 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 13:34:18.495191   18533 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0108 13:34:18.495300   18533 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 13:34:18.495375   18533 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0108 13:34:18.495456   18533 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0108 13:34:18.495526   18533 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0108 13:34:18.495635   18533 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 13:34:18.495741   18533 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 13:34:18.495822   18533 kubeadm.go:317] [certs] Using the existing "sa" key
	I0108 13:34:18.495889   18533 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 13:34:18.524834   18533 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 13:34:18.621395   18533 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 13:34:18.957355   18533 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 13:34:19.055697   18533 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 13:34:19.057635   18533 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 13:34:19.101043   18533 out.go:204]   - Booting up control plane ...
	I0108 13:34:19.101146   18533 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 13:34:19.101243   18533 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 13:34:19.101315   18533 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 13:34:19.101390   18533 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 13:34:19.101534   18533 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 13:34:59.069605   18533 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0108 13:34:59.070460   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:34:59.070661   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:35:04.071084   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:35:04.071237   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:35:14.073005   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:35:14.073160   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:35:34.074398   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:35:34.074635   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:36:14.076816   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:36:14.077076   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:36:14.077091   18533 kubeadm.go:317] 
	I0108 13:36:14.077135   18533 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 13:36:14.077189   18533 kubeadm.go:317] 	timed out waiting for the condition
	I0108 13:36:14.077195   18533 kubeadm.go:317] 
	I0108 13:36:14.077229   18533 kubeadm.go:317] This error is likely caused by:
	I0108 13:36:14.077304   18533 kubeadm.go:317] 	- The kubelet is not running
	I0108 13:36:14.077439   18533 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 13:36:14.077449   18533 kubeadm.go:317] 
	I0108 13:36:14.077554   18533 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 13:36:14.077626   18533 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 13:36:14.077683   18533 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 13:36:14.077694   18533 kubeadm.go:317] 
	I0108 13:36:14.077797   18533 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 13:36:14.077907   18533 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 13:36:14.077968   18533 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 13:36:14.078002   18533 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 13:36:14.078057   18533 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 13:36:14.078083   18533 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0108 13:36:14.080159   18533 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 13:36:14.080280   18533 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0108 13:36:14.080360   18533 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 13:36:14.080416   18533 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 13:36:14.080468   18533 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 13:36:14.080486   18533 kubeadm.go:398] StartCluster complete in 7m57.947691712s
	I0108 13:36:14.080587   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:36:14.104339   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.104354   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:36:14.104437   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:36:14.127858   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.127873   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:36:14.127960   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:36:14.150685   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.150699   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:36:14.150783   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:36:14.172941   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.172956   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:36:14.173036   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:36:14.196028   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.196042   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:36:14.196126   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:36:14.220338   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.220351   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:36:14.220453   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:36:14.244073   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.244087   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:36:14.244177   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:36:14.269231   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.269246   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:36:14.269254   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:36:14.269261   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:36:14.308434   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:36:14.308448   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:36:14.321293   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:36:14.321307   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:36:14.376966   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:36:14.376978   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:36:14.376983   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:36:14.391333   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:36:14.391345   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:36:16.441073   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049706516s)
	W0108 13:36:16.441204   18533 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0108 13:36:16.441221   18533 out.go:239] * 
	* 
	W0108 13:36:16.441365   18533 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 13:36:16.441383   18533 out.go:239] * 
	* 
	W0108 13:36:16.442054   18533 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 13:36:16.505828   18533 out.go:177] 
	W0108 13:36:16.548741   18533 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 13:36:16.548871   18533 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 13:36:16.549022   18533 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 13:36:16.590720   18533 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-132223 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
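For reference, a minimal sketch that collects the diagnostics the kubeadm output above repeatedly suggests, using the profile name from the failing command (old-k8s-version-132223); the exact invocations are assumptions assembled from commands quoted in the log, not part of the test run itself:

    # Check kubelet state inside the minikube node (suggested by kubeadm above)
    minikube ssh -p old-k8s-version-132223 -- sudo systemctl status kubelet
    minikube ssh -p old-k8s-version-132223 -- sudo journalctl -xeu kubelet
    # List any Kubernetes containers the runtime started (also suggested above)
    minikube ssh -p old-k8s-version-132223 -- "docker ps -a | grep kube | grep -v pause"
    # Capture full logs for a bug report, as the warning box recommends
    out/minikube-darwin-amd64 logs --file=logs.txt -p old-k8s-version-132223
    # Retry with the cgroup driver hint from the 'Suggestion' line above
    out/minikube-darwin-amd64 start -p old-k8s-version-132223 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd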
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-132223
helpers_test.go:235: (dbg) docker inspect old-k8s-version-132223:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f",
	        "Created": "2023-01-08T21:22:34.19825588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 271181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:28:12.59475468Z",
	            "FinishedAt": "2023-01-08T21:28:09.256674953Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hostname",
	        "HostsPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hosts",
	        "LogPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f-json.log",
	        "Name": "/old-k8s-version-132223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-132223:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-132223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-132223",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-132223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-132223",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cccd9a830563cfdf91ce0bdb68c2ca01360d5f5427e33608df2cedf47fdf29aa",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53990"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53993"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53994"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cccd9a830563",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-132223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76595a40dec8",
	                        "old-k8s-version-132223"
	                    ],
	                    "NetworkID": "8205ca6e86e721bc270dfbf0384edb3c10ca81d0afb1c6b7756a52514e9f6e59",
	                    "EndpointID": "5469fce4ff9eff554edf16f9ae862fd31a7797080d7ceac6012e86a1c678033f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 2 (420.255737ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-132223 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-132223 logs -n 25: (3.619001521s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -p calico-130509 --memory=2048                    | calico-130509          | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:26 PST |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=calico                    |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	| ssh     | -p cilium-130509 pgrep -a                         | cilium-130509          | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:22 PST |
	|         | kubelet                                           |                        |         |         |                     |                     |
	| delete  | -p cilium-130509                                  | cilium-130509          | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:22 PST |
	| start   | -p old-k8s-version-132223                         | old-k8s-version-132223 | jenkins | v1.28.0 | 08 Jan 23 13:22 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-132223   | old-k8s-version-132223 | jenkins | v1.28.0 | 08 Jan 23 13:26 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| ssh     | -p calico-130509 pgrep -a                         | calico-130509          | jenkins | v1.28.0 | 08 Jan 23 13:26 PST | 08 Jan 23 13:27 PST |
	|         | kubelet                                           |                        |         |         |                     |                     |
	| delete  | -p calico-130509                                  | calico-130509          | jenkins | v1.28.0 | 08 Jan 23 13:27 PST | 08 Jan 23 13:27 PST |
	| start   | -p no-preload-132717                              | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:27 PST | 08 Jan 23 13:28 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-132223                         | old-k8s-version-132223 | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:28 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-132223        | old-k8s-version-132223 | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:28 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-132223                         | old-k8s-version-132223 | jenkins | v1.28.0 | 08 Jan 23 13:28 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-132717        | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:28 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-132717                              | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:28 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-132717             | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:28 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-132717                              | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:33 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-132717 sudo                         | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-132717                              | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-132717                              | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-132717                              | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	| delete  | -p no-preload-132717                              | no-preload-132717      | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	| start   | -p embed-certs-133414                             | embed-certs-133414     | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-133414       | embed-certs-133414     | jenkins | v1.28.0 | 08 Jan 23 13:35 PST | 08 Jan 23 13:35 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-133414                             | embed-certs-133414     | jenkins | v1.28.0 | 08 Jan 23 13:35 PST | 08 Jan 23 13:35 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-133414            | embed-certs-133414     | jenkins | v1.28.0 | 08 Jan 23 13:35 PST | 08 Jan 23 13:35 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-133414                             | embed-certs-133414     | jenkins | v1.28.0 | 08 Jan 23 13:35 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 13:35:22
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 13:35:22.925913   19386 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:35:22.926114   19386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:35:22.926119   19386 out.go:309] Setting ErrFile to fd 2...
	I0108 13:35:22.926123   19386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:35:22.926231   19386 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 13:35:22.926762   19386 out.go:303] Setting JSON to false
	I0108 13:35:22.946822   19386 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5695,"bootTime":1673208027,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 13:35:22.946934   19386 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:35:22.968544   19386 out.go:177] * [embed-certs-133414] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:35:23.011313   19386 notify.go:220] Checking for updates...
	I0108 13:35:23.032950   19386 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:35:23.053993   19386 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:35:23.075125   19386 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:35:23.097242   19386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:35:23.119089   19386 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 13:35:23.141243   19386 config.go:180] Loaded profile config "embed-certs-133414": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:35:23.141641   19386 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:35:23.202333   19386 docker.go:137] docker version: linux-20.10.21
	I0108 13:35:23.202471   19386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:35:23.343534   19386 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:35:23.252439661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:35:23.387096   19386 out.go:177] * Using the docker driver based on existing profile
	I0108 13:35:23.408051   19386 start.go:294] selected driver: docker
	I0108 13:35:23.408080   19386 start.go:838] validating driver "docker" against &{Name:embed-certs-133414 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-133414 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mo
untString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:35:23.408288   19386 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:35:23.412263   19386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:35:23.554495   19386 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:35:23.461612458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:35:23.554657   19386 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 13:35:23.554688   19386 cni.go:95] Creating CNI manager for ""
	I0108 13:35:23.554700   19386 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:35:23.554711   19386 start_flags.go:317] config:
	{Name:embed-certs-133414 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-133414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:35:23.597937   19386 out.go:177] * Starting control plane node embed-certs-133414 in cluster embed-certs-133414
	I0108 13:35:23.619048   19386 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 13:35:23.640041   19386 out.go:177] * Pulling base image ...
	I0108 13:35:23.682042   19386 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:35:23.682044   19386 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 13:35:23.682102   19386 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 13:35:23.682116   19386 cache.go:57] Caching tarball of preloaded images
	I0108 13:35:23.682246   19386 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:35:23.682259   19386 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 13:35:23.682852   19386 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414/config.json ...
	I0108 13:35:23.737461   19386 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 13:35:23.737479   19386 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 13:35:23.737496   19386 cache.go:193] Successfully downloaded all kic artifacts
	I0108 13:35:23.737561   19386 start.go:364] acquiring machines lock for embed-certs-133414: {Name:mk3466722829ba3011e46de5ec9ddaefe7b1316e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:35:23.737647   19386 start.go:368] acquired machines lock for "embed-certs-133414" in 64.862µs
	I0108 13:35:23.737671   19386 start.go:96] Skipping create...Using existing machine configuration
	I0108 13:35:23.737679   19386 fix.go:55] fixHost starting: 
	I0108 13:35:23.737949   19386 cli_runner.go:164] Run: docker container inspect embed-certs-133414 --format={{.State.Status}}
	I0108 13:35:23.795689   19386 fix.go:103] recreateIfNeeded on embed-certs-133414: state=Stopped err=<nil>
	W0108 13:35:23.795715   19386 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 13:35:23.839201   19386 out.go:177] * Restarting existing docker container for "embed-certs-133414" ...
	I0108 13:35:23.860657   19386 cli_runner.go:164] Run: docker start embed-certs-133414
	I0108 13:35:24.193629   19386 cli_runner.go:164] Run: docker container inspect embed-certs-133414 --format={{.State.Status}}
	I0108 13:35:24.255115   19386 kic.go:415] container "embed-certs-133414" state is running.
	I0108 13:35:24.255779   19386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-133414
	I0108 13:35:24.321425   19386 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414/config.json ...
	I0108 13:35:24.321897   19386 machine.go:88] provisioning docker machine ...
	I0108 13:35:24.321935   19386 ubuntu.go:169] provisioning hostname "embed-certs-133414"
	I0108 13:35:24.322037   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:24.399842   19386 main.go:134] libmachine: Using SSH client type: native
	I0108 13:35:24.400054   19386 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54201 <nil> <nil>}
	I0108 13:35:24.400068   19386 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-133414 && echo "embed-certs-133414" | sudo tee /etc/hostname
	I0108 13:35:24.557891   19386 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-133414
	
	I0108 13:35:24.558052   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:24.620663   19386 main.go:134] libmachine: Using SSH client type: native
	I0108 13:35:24.620832   19386 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54201 <nil> <nil>}
	I0108 13:35:24.620845   19386 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-133414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-133414/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-133414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 13:35:24.740971   19386 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:35:24.740998   19386 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 13:35:24.741022   19386 ubuntu.go:177] setting up certificates
	I0108 13:35:24.741033   19386 provision.go:83] configureAuth start
	I0108 13:35:24.741143   19386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-133414
	I0108 13:35:24.804833   19386 provision.go:138] copyHostCerts
	I0108 13:35:24.804944   19386 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 13:35:24.804956   19386 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 13:35:24.805064   19386 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 13:35:24.805315   19386 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 13:35:24.805323   19386 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 13:35:24.805431   19386 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 13:35:24.805595   19386 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 13:35:24.805601   19386 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 13:35:24.805667   19386 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 13:35:24.805799   19386 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.embed-certs-133414 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-133414]
	I0108 13:35:24.967583   19386 provision.go:172] copyRemoteCerts
	I0108 13:35:24.967661   19386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 13:35:24.967735   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:25.030611   19386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54201 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/embed-certs-133414/id_rsa Username:docker}
	I0108 13:35:25.118838   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 13:35:25.137401   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0108 13:35:25.156698   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 13:35:25.175000   19386 provision.go:86] duration metric: configureAuth took 433.94389ms
	I0108 13:35:25.175015   19386 ubuntu.go:193] setting minikube options for container-runtime
	I0108 13:35:25.175185   19386 config.go:180] Loaded profile config "embed-certs-133414": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:35:25.175262   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:25.234777   19386 main.go:134] libmachine: Using SSH client type: native
	I0108 13:35:25.234936   19386 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54201 <nil> <nil>}
	I0108 13:35:25.234944   19386 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 13:35:25.354089   19386 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 13:35:25.354109   19386 ubuntu.go:71] root file system type: overlay
	I0108 13:35:25.354293   19386 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 13:35:25.354398   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:25.414782   19386 main.go:134] libmachine: Using SSH client type: native
	I0108 13:35:25.414948   19386 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54201 <nil> <nil>}
	I0108 13:35:25.414994   19386 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 13:35:25.543489   19386 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 13:35:25.543593   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:25.602264   19386 main.go:134] libmachine: Using SSH client type: native
	I0108 13:35:25.602419   19386 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54201 <nil> <nil>}
	I0108 13:35:25.602439   19386 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 13:35:25.725423   19386 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:35:25.725445   19386 machine.go:91] provisioned docker machine in 1.403532467s
	I0108 13:35:25.725455   19386 start.go:300] post-start starting for "embed-certs-133414" (driver="docker")
	I0108 13:35:25.725460   19386 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 13:35:25.725543   19386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 13:35:25.725609   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:25.783118   19386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54201 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/embed-certs-133414/id_rsa Username:docker}
	I0108 13:35:25.870048   19386 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 13:35:25.874098   19386 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 13:35:25.874137   19386 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 13:35:25.874144   19386 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 13:35:25.874151   19386 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 13:35:25.874159   19386 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 13:35:25.874302   19386 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 13:35:25.874477   19386 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 13:35:25.874657   19386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 13:35:25.882799   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:35:25.901029   19386 start.go:303] post-start completed in 175.563894ms
	I0108 13:35:25.901128   19386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 13:35:25.901201   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:25.962329   19386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54201 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/embed-certs-133414/id_rsa Username:docker}
	I0108 13:35:26.046537   19386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 13:35:26.051378   19386 fix.go:57] fixHost completed within 2.313678496s
	I0108 13:35:26.051394   19386 start.go:83] releasing machines lock for "embed-certs-133414", held for 2.313727051s
	I0108 13:35:26.051496   19386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-133414
	I0108 13:35:26.108242   19386 ssh_runner.go:195] Run: cat /version.json
	I0108 13:35:26.108253   19386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 13:35:26.108320   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:26.108331   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:26.204312   19386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54201 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/embed-certs-133414/id_rsa Username:docker}
	I0108 13:35:26.204574   19386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54201 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/embed-certs-133414/id_rsa Username:docker}
	I0108 13:35:26.347901   19386 ssh_runner.go:195] Run: systemctl --version
	I0108 13:35:26.352939   19386 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 13:35:26.362653   19386 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 13:35:26.362722   19386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 13:35:26.374612   19386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 13:35:26.387997   19386 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 13:35:26.456844   19386 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 13:35:26.529435   19386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:35:26.601132   19386 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 13:35:26.852417   19386 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 13:35:26.920103   19386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:35:26.986298   19386 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 13:35:26.996249   19386 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 13:35:26.996344   19386 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 13:35:27.000454   19386 start.go:472] Will wait 60s for crictl version
	I0108 13:35:27.000505   19386 ssh_runner.go:195] Run: sudo crictl version
	I0108 13:35:27.108099   19386 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0108 13:35:27.108199   19386 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:35:27.137184   19386 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:35:27.212709   19386 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 13:35:27.212950   19386 cli_runner.go:164] Run: docker exec -t embed-certs-133414 dig +short host.docker.internal
	I0108 13:35:27.324211   19386 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 13:35:27.324352   19386 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 13:35:27.328925   19386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:35:27.338961   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:27.402250   19386 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:35:27.402343   19386 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:35:27.428899   19386 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0108 13:35:27.428918   19386 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:35:27.429015   19386 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:35:27.455705   19386 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0108 13:35:27.455729   19386 cache_images.go:84] Images are preloaded, skipping loading
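
The two image listings above differ only in ordering, so the node already has every image expected for Kubernetes v1.25.3 and both the preload extraction and the image load are skipped. A quick manual spot-check of the same condition (the image name is just one entry from the list above):

    # is a given expected image already present in the node's Docker daemon?
    docker images --format '{{.Repository}}:{{.Tag}}' | grep -Fx 'registry.k8s.io/kube-apiserver:v1.25.3'
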
	I0108 13:35:27.455827   19386 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 13:35:27.527386   19386 cni.go:95] Creating CNI manager for ""
	I0108 13:35:27.527417   19386 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:35:27.527452   19386 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 13:35:27.527465   19386 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-133414 NodeName:embed-certs-133414 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 13:35:27.527600   19386 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-133414"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 13:35:27.527723   19386 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-133414 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:embed-certs-133414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
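
The [Unit]/[Service] fragment above becomes the kubelet's systemd drop-in; the next few lines copy it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside the base kubelet.service. To confirm what the kubelet will actually run with, systemd can be asked directly on the node; a small sketch:

    systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart    # the ExecStart line actually in effect
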
	I0108 13:35:27.527802   19386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 13:35:27.535927   19386 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 13:35:27.535993   19386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 13:35:27.543560   19386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0108 13:35:27.556807   19386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 13:35:27.570514   19386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2040 bytes)
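
The file copied here, kubeadm.yaml.new, is the freshly rendered config shown above; further down, the restart path diffs it against the kubeadm.yaml already on the node to decide whether a reconfigure is needed. The same comparison run by hand:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
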
	I0108 13:35:27.583981   19386 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 13:35:27.588436   19386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:35:27.598493   19386 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414 for IP: 192.168.67.2
	I0108 13:35:27.598627   19386 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 13:35:27.598687   19386 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 13:35:27.598783   19386 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414/client.key
	I0108 13:35:27.598858   19386 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414/apiserver.key.c7fa3a9e
	I0108 13:35:27.598917   19386 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414/proxy-client.key
	I0108 13:35:27.599149   19386 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 13:35:27.599189   19386 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 13:35:27.599202   19386 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 13:35:27.599242   19386 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 13:35:27.599280   19386 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 13:35:27.599316   19386 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 13:35:27.599391   19386 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:35:27.599990   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 13:35:27.617838   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 13:35:27.635582   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 13:35:27.653718   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/embed-certs-133414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 13:35:27.671878   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 13:35:27.689836   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 13:35:27.707067   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 13:35:27.724852   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 13:35:27.742533   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 13:35:27.759916   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 13:35:27.777705   19386 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 13:35:27.795628   19386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 13:35:27.808779   19386 ssh_runner.go:195] Run: openssl version
	I0108 13:35:27.814897   19386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 13:35:27.823363   19386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 13:35:27.827441   19386 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 13:35:27.827505   19386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 13:35:27.833033   19386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 13:35:27.840871   19386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 13:35:27.848975   19386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:35:27.853245   19386 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:35:27.853304   19386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:35:27.858866   19386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 13:35:27.867089   19386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 13:35:27.875778   19386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 13:35:27.879905   19386 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 13:35:27.879960   19386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 13:35:27.885670   19386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
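
The openssl/ln pairs in this stretch build the usual hashed-symlink layout under /etc/ssl/certs: each CA file gets a symlink named after its OpenSSL subject hash (b5213941.0 for minikubeCA.pem above). The same idea, sketched for a single certificate:

    # link a CA into /etc/ssl/certs under its OpenSSL subject hash
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
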
	I0108 13:35:27.893520   19386 kubeadm.go:396] StartCluster: {Name:embed-certs-133414 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-133414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:35:27.893676   19386 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:35:27.917394   19386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 13:35:27.925519   19386 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 13:35:27.925535   19386 kubeadm.go:627] restartCluster start
	I0108 13:35:27.925597   19386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 13:35:27.932744   19386 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:27.948113   19386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-133414
	I0108 13:35:28.008573   19386 kubeconfig.go:135] verify returned: extract IP: "embed-certs-133414" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:35:28.008765   19386 kubeconfig.go:146] "embed-certs-133414" context is missing from /Users/jenkins/minikube-integration/15565-2761/kubeconfig - will repair!
	I0108 13:35:28.009103   19386 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:35:28.010276   19386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 13:35:28.018400   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:28.018467   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:28.028045   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:28.228475   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:28.228657   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:28.239607   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:28.430161   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:28.430337   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:28.441508   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:28.629322   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:28.629504   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:28.640345   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:28.828291   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:28.828411   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:28.839414   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:29.030170   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:29.030414   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:29.041549   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:29.229219   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:29.229409   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:29.240961   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:29.429296   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:29.429441   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:29.440409   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:29.628461   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:29.628550   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:29.638252   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:29.828594   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:29.828745   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:29.839634   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:30.030225   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:30.030401   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:30.041878   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:30.230062   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:30.230218   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:30.241220   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:30.428387   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:30.428513   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:30.438116   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:30.628726   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:30.628900   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:30.640055   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:30.829175   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:30.829310   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:30.840067   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:31.029438   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:31.029593   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:31.040711   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:31.040721   19386 api_server.go:165] Checking apiserver status ...
	I0108 13:35:31.040781   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:35:31.049241   19386 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:31.049253   19386 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 13:35:31.049261   19386 kubeadm.go:1114] stopping kube-system containers ...
	I0108 13:35:31.049346   19386 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:35:31.074377   19386 docker.go:444] Stopping containers: [2b5a855b27ba f7e63c47a0f4 9e02144ad7c6 5a81d3095105 bd948eb5c17f 10ec5a276aa9 160c822eb8c5 419413b4b63d f8fec6aafd59 05245353ffc9 803418ac51d5 378259ef9f1b 5898d8160214 6da69c91dce9 02c8d020212d b3b14738c69b]
	I0108 13:35:31.074493   19386 ssh_runner.go:195] Run: docker stop 2b5a855b27ba f7e63c47a0f4 9e02144ad7c6 5a81d3095105 bd948eb5c17f 10ec5a276aa9 160c822eb8c5 419413b4b63d f8fec6aafd59 05245353ffc9 803418ac51d5 378259ef9f1b 5898d8160214 6da69c91dce9 02c8d020212d b3b14738c69b
	I0108 13:35:31.098550   19386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 13:35:31.109807   19386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:35:31.118142   19386 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan  8 21:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:34 /etc/kubernetes/scheduler.conf
	
	I0108 13:35:31.118252   19386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 13:35:31.126933   19386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 13:35:31.135261   19386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 13:35:31.143554   19386 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:31.143638   19386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 13:35:31.152058   19386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 13:35:31.160450   19386 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:35:31.160525   19386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 13:35:31.168224   19386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:35:31.176796   19386 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 13:35:31.176811   19386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:35:31.229276   19386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:35:31.635379   19386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:35:31.767596   19386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:35:31.819843   19386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:35:31.906705   19386 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:35:31.906789   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:35:32.456889   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:35:34.074398   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:35:34.074635   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:35:32.955158   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:35:33.456645   19386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:35:33.468768   19386 api_server.go:71] duration metric: took 1.562057981s to wait for apiserver process to appear ...
	I0108 13:35:33.468790   19386 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:35:33.468806   19386 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54205/healthz ...
	I0108 13:35:36.399686   19386 api_server.go:278] https://127.0.0.1:54205/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 13:35:36.399706   19386 api_server.go:102] status: https://127.0.0.1:54205/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 13:35:36.900494   19386 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54205/healthz ...
	I0108 13:35:36.906124   19386 api_server.go:278] https://127.0.0.1:54205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:35:36.906143   19386 api_server.go:102] status: https://127.0.0.1:54205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:35:37.400010   19386 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54205/healthz ...
	I0108 13:35:37.405534   19386 api_server.go:278] https://127.0.0.1:54205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:35:37.405554   19386 api_server.go:102] status: https://127.0.0.1:54205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:35:37.899889   19386 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54205/healthz ...
	I0108 13:35:37.906039   19386 api_server.go:278] https://127.0.0.1:54205/healthz returned 200:
	ok
	I0108 13:35:37.913361   19386 api_server.go:140] control plane version: v1.25.3
	I0108 13:35:37.913376   19386 api_server.go:130] duration metric: took 4.444559425s to wait for apiserver health ...
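
The 403, then 500, then 200 responses above are the apiserver coming up: the anonymous /healthz probe is refused at first, then answered with 500 while post-start hooks such as rbac/bootstrap-roles are still completing, and finally with ok. The same endpoint can be polled by hand; a sketch assuming the host-mapped port 54205 from this run, with TLS verification skipped because the serving cert is minikube's own CA:

    # poll the apiserver health endpoint until it returns 200/ok
    until curl -ksf https://127.0.0.1:54205/healthz; do sleep 1; done; echo
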
	I0108 13:35:37.913385   19386 cni.go:95] Creating CNI manager for ""
	I0108 13:35:37.913392   19386 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:35:37.913402   19386 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:35:37.922497   19386 system_pods.go:59] 8 kube-system pods found
	I0108 13:35:37.922514   19386 system_pods.go:61] "coredns-565d847f94-79jw8" [a5bb49b1-6bd9-432e-970e-cbcac6e4cff8] Running
	I0108 13:35:37.922518   19386 system_pods.go:61] "etcd-embed-certs-133414" [213bbdc6-9a79-4944-aaed-5de27e3be275] Running
	I0108 13:35:37.922528   19386 system_pods.go:61] "kube-apiserver-embed-certs-133414" [d7043482-8593-40b9-b0aa-0c5e793ec0cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 13:35:37.922535   19386 system_pods.go:61] "kube-controller-manager-embed-certs-133414" [2b609e20-6de2-4269-b817-12c93ac82def] Running
	I0108 13:35:37.922541   19386 system_pods.go:61] "kube-proxy-gb9gk" [ff928ad8-6547-4bc3-8a81-7a00df83db8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 13:35:37.922545   19386 system_pods.go:61] "kube-scheduler-embed-certs-133414" [5fcee7f9-fd73-4789-8d86-e66e3a2303b2] Running
	I0108 13:35:37.922552   19386 system_pods.go:61] "metrics-server-5c8fd5cf8-hx8h2" [ccc54fc8-5b92-40bd-acc2-4184270c899b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 13:35:37.922557   19386 system_pods.go:61] "storage-provisioner" [24a43875-dc10-4182-9123-5b5fe1714ad1] Running
	I0108 13:35:37.922561   19386 system_pods.go:74] duration metric: took 9.154574ms to wait for pod list to return data ...
	I0108 13:35:37.922568   19386 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:35:37.951018   19386 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 13:35:37.951038   19386 node_conditions.go:123] node cpu capacity is 6
	I0108 13:35:37.951049   19386 node_conditions.go:105] duration metric: took 28.476202ms to run NodePressure ...
	I0108 13:35:37.951064   19386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:35:38.286630   19386 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 13:35:38.291671   19386 kubeadm.go:778] kubelet initialised
	I0108 13:35:38.291684   19386 kubeadm.go:779] duration metric: took 5.037857ms waiting for restarted kubelet to initialise ...
	I0108 13:35:38.291692   19386 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:35:38.355148   19386 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-79jw8" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:38.364724   19386 pod_ready.go:92] pod "coredns-565d847f94-79jw8" in "kube-system" namespace has status "Ready":"True"
	I0108 13:35:38.364736   19386 pod_ready.go:81] duration metric: took 9.572604ms waiting for pod "coredns-565d847f94-79jw8" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:38.364775   19386 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-133414" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:38.372389   19386 pod_ready.go:92] pod "etcd-embed-certs-133414" in "kube-system" namespace has status "Ready":"True"
	I0108 13:35:38.372400   19386 pod_ready.go:81] duration metric: took 7.619422ms waiting for pod "etcd-embed-certs-133414" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:38.372407   19386 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-133414" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:40.386573   19386 pod_ready.go:102] pod "kube-apiserver-embed-certs-133414" in "kube-system" namespace has status "Ready":"False"
	I0108 13:35:42.885257   19386 pod_ready.go:102] pod "kube-apiserver-embed-certs-133414" in "kube-system" namespace has status "Ready":"False"
	I0108 13:35:44.887158   19386 pod_ready.go:102] pod "kube-apiserver-embed-certs-133414" in "kube-system" namespace has status "Ready":"False"
	I0108 13:35:47.388335   19386 pod_ready.go:102] pod "kube-apiserver-embed-certs-133414" in "kube-system" namespace has status "Ready":"False"
	I0108 13:35:49.885577   19386 pod_ready.go:102] pod "kube-apiserver-embed-certs-133414" in "kube-system" namespace has status "Ready":"False"
	I0108 13:35:51.385204   19386 pod_ready.go:92] pod "kube-apiserver-embed-certs-133414" in "kube-system" namespace has status "Ready":"True"
	I0108 13:35:51.385223   19386 pod_ready.go:81] duration metric: took 13.012752068s waiting for pod "kube-apiserver-embed-certs-133414" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:51.385233   19386 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-133414" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:52.398827   19386 pod_ready.go:92] pod "kube-controller-manager-embed-certs-133414" in "kube-system" namespace has status "Ready":"True"
	I0108 13:35:52.398846   19386 pod_ready.go:81] duration metric: took 1.013598036s waiting for pod "kube-controller-manager-embed-certs-133414" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:52.398857   19386 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gb9gk" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:52.403453   19386 pod_ready.go:92] pod "kube-proxy-gb9gk" in "kube-system" namespace has status "Ready":"True"
	I0108 13:35:52.403462   19386 pod_ready.go:81] duration metric: took 4.598679ms waiting for pod "kube-proxy-gb9gk" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:52.403468   19386 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-133414" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:52.407591   19386 pod_ready.go:92] pod "kube-scheduler-embed-certs-133414" in "kube-system" namespace has status "Ready":"True"
	I0108 13:35:52.407600   19386 pod_ready.go:81] duration metric: took 4.126324ms waiting for pod "kube-scheduler-embed-certs-133414" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:52.407609   19386 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace to be "Ready" ...
	I0108 13:35:54.419246   19386 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace has status "Ready":"False"
	I0108 13:35:56.419983   19386 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace has status "Ready":"False"
	I0108 13:35:58.420086   19386 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace has status "Ready":"False"
	I0108 13:36:00.919682   19386 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace has status "Ready":"False"
	I0108 13:36:02.920012   19386 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace has status "Ready":"False"
	I0108 13:36:04.920066   19386 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace has status "Ready":"False"
	I0108 13:36:06.921169   19386 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace has status "Ready":"False"
	I0108 13:36:08.922185   19386 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace has status "Ready":"False"
	I0108 13:36:11.421360   19386 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-hx8h2" in "kube-system" namespace has status "Ready":"False"
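
Each pod_ready probe above checks the pod's Ready condition; metrics-server stays unready here, most likely because this test points its image at the deliberately unreachable registry fake.domain (see CustomAddonRegistries in the StartCluster line earlier). A hand-run equivalent of the same wait, using the pod name from the log (kubectl itself is not part of this log, just a convenient stand-in):

    kubectl -n kube-system wait --for=condition=Ready \
      pod/metrics-server-5c8fd5cf8-hx8h2 --timeout=4m
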
	I0108 13:36:14.076816   18533 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0108 13:36:14.077076   18533 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0108 13:36:14.077091   18533 kubeadm.go:317] 
	I0108 13:36:14.077135   18533 kubeadm.go:317] Unfortunately, an error has occurred:
	I0108 13:36:14.077189   18533 kubeadm.go:317] 	timed out waiting for the condition
	I0108 13:36:14.077195   18533 kubeadm.go:317] 
	I0108 13:36:14.077229   18533 kubeadm.go:317] This error is likely caused by:
	I0108 13:36:14.077304   18533 kubeadm.go:317] 	- The kubelet is not running
	I0108 13:36:14.077439   18533 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0108 13:36:14.077449   18533 kubeadm.go:317] 
	I0108 13:36:14.077554   18533 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0108 13:36:14.077626   18533 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0108 13:36:14.077683   18533 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0108 13:36:14.077694   18533 kubeadm.go:317] 
	I0108 13:36:14.077797   18533 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0108 13:36:14.077907   18533 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0108 13:36:14.077968   18533 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0108 13:36:14.078002   18533 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0108 13:36:14.078057   18533 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0108 13:36:14.078083   18533 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0108 13:36:14.080159   18533 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0108 13:36:14.080280   18533 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0108 13:36:14.080360   18533 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 13:36:14.080416   18533 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0108 13:36:14.080468   18533 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0108 13:36:14.080486   18533 kubeadm.go:398] StartCluster complete in 7m57.947691712s
	I0108 13:36:14.080587   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0108 13:36:14.104339   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.104354   18533 logs.go:276] No container was found matching "kube-apiserver"
	I0108 13:36:14.104437   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0108 13:36:14.127858   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.127873   18533 logs.go:276] No container was found matching "etcd"
	I0108 13:36:14.127960   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0108 13:36:14.150685   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.150699   18533 logs.go:276] No container was found matching "coredns"
	I0108 13:36:14.150783   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0108 13:36:14.172941   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.172956   18533 logs.go:276] No container was found matching "kube-scheduler"
	I0108 13:36:14.173036   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0108 13:36:14.196028   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.196042   18533 logs.go:276] No container was found matching "kube-proxy"
	I0108 13:36:14.196126   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0108 13:36:14.220338   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.220351   18533 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0108 13:36:14.220453   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0108 13:36:14.244073   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.244087   18533 logs.go:276] No container was found matching "storage-provisioner"
	I0108 13:36:14.244177   18533 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0108 13:36:14.269231   18533 logs.go:274] 0 containers: []
	W0108 13:36:14.269246   18533 logs.go:276] No container was found matching "kube-controller-manager"
	I0108 13:36:14.269254   18533 logs.go:123] Gathering logs for kubelet ...
	I0108 13:36:14.269261   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 13:36:14.308434   18533 logs.go:123] Gathering logs for dmesg ...
	I0108 13:36:14.308448   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 13:36:14.321293   18533 logs.go:123] Gathering logs for describe nodes ...
	I0108 13:36:14.321307   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0108 13:36:14.376966   18533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0108 13:36:14.376978   18533 logs.go:123] Gathering logs for Docker ...
	I0108 13:36:14.376983   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0108 13:36:14.391333   18533 logs.go:123] Gathering logs for container status ...
	I0108 13:36:14.391345   18533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 13:36:16.441073   18533 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049706516s)
	W0108 13:36:16.441204   18533 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
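
kubeadm's advice above all points at the kubelet on the failing node; a minimal triage pass using exactly the commands it suggests (CONTAINERID is whatever the ps listing turns up):

    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100
    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID    # inspect the failing container found above
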
	W0108 13:36:16.441221   18533 out.go:239] * 
	W0108 13:36:16.441365   18533 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 13:36:16.441383   18533 out.go:239] * 
	W0108 13:36:16.442054   18533 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 13:36:16.505828   18533 out.go:177] 
	W0108 13:36:16.548741   18533 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0108 13:36:16.548871   18533 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0108 13:36:16.549022   18533 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0108 13:36:16.590720   18533 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sun 2023-01-08 21:28:12 UTC, end at Sun 2023-01-08 21:36:18 UTC. --
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[128]: time="2023-01-08T21:28:15.083689881Z" level=info msg="Processing signal 'terminated'"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[128]: time="2023-01-08T21:28:15.084513971Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[128]: time="2023-01-08T21:28:15.084732043Z" level=info msg="Daemon shutdown complete"
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: docker.service: Succeeded.
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.137878168Z" level=info msg="Starting up"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139628557Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139673949Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139695987Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139707659Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141213135Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141257062Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141279605Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141290776Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.146303293Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.150605267Z" level=info msg="Loading containers: start."
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.229829971Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.260962319Z" level=info msg="Loading containers: done."
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.269713094Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.269774718Z" level=info msg="Daemon has completed initialization"
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Started Docker Application Container Engine.
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.295857338Z" level=info msg="API listen on [::]:2376"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.298848948Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-08T21:36:20Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  21:36:20 up  1:35,  0 users,  load average: 0.73, 1.04, 1.35
	Linux old-k8s-version-132223 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:28:12 UTC, end at Sun 2023-01-08 21:36:20 UTC. --
	Jan 08 21:36:19 old-k8s-version-132223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:36:19 old-k8s-version-132223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jan 08 21:36:19 old-k8s-version-132223 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:36:19 old-k8s-version-132223 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:36:19 old-k8s-version-132223 kubelet[14588]: I0108 21:36:19.932093   14588 server.go:410] Version: v1.16.0
	Jan 08 21:36:19 old-k8s-version-132223 kubelet[14588]: I0108 21:36:19.932482   14588 plugins.go:100] No cloud provider specified.
	Jan 08 21:36:19 old-k8s-version-132223 kubelet[14588]: I0108 21:36:19.932518   14588 server.go:773] Client rotation is on, will bootstrap in background
	Jan 08 21:36:19 old-k8s-version-132223 kubelet[14588]: I0108 21:36:19.934399   14588 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 08 21:36:19 old-k8s-version-132223 kubelet[14588]: W0108 21:36:19.935055   14588 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 08 21:36:19 old-k8s-version-132223 kubelet[14588]: W0108 21:36:19.935122   14588 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 08 21:36:19 old-k8s-version-132223 kubelet[14588]: F0108 21:36:19.935147   14588 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 08 21:36:19 old-k8s-version-132223 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 08 21:36:19 old-k8s-version-132223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:36:20 old-k8s-version-132223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jan 08 21:36:20 old-k8s-version-132223 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:36:20 old-k8s-version-132223 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:36:20 old-k8s-version-132223 kubelet[14624]: I0108 21:36:20.688636   14624 server.go:410] Version: v1.16.0
	Jan 08 21:36:20 old-k8s-version-132223 kubelet[14624]: I0108 21:36:20.688844   14624 plugins.go:100] No cloud provider specified.
	Jan 08 21:36:20 old-k8s-version-132223 kubelet[14624]: I0108 21:36:20.688854   14624 server.go:773] Client rotation is on, will bootstrap in background
	Jan 08 21:36:20 old-k8s-version-132223 kubelet[14624]: I0108 21:36:20.690510   14624 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 08 21:36:20 old-k8s-version-132223 kubelet[14624]: W0108 21:36:20.691213   14624 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 08 21:36:20 old-k8s-version-132223 kubelet[14624]: W0108 21:36:20.691286   14624 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 08 21:36:20 old-k8s-version-132223 kubelet[14624]: F0108 21:36:20.691312   14624 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 08 21:36:20 old-k8s-version-132223 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 08 21:36:20 old-k8s-version-132223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:36:20.529252   19507 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
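The kubelet journal above ends in a restart loop on "failed to run Kubelet: mountpoint for cpu not found", which lines up with kubeadm's "required cgroups disabled" hint earlier in the capture. As a rough manual follow-up (not something this test runs), the cgroup mounts inside the node container and the kubelet journal could be checked along these lines, assuming the profile/container name old-k8s-version-132223 from this run:

	# hypothetical diagnostic, not executed by the harness: list cgroup mounts inside the node container
	docker exec old-k8s-version-132223 grep cgroup /proc/self/mounts
	# re-read the kubelet unit log that the kubeadm advice above points at
	out/minikube-darwin-amd64 ssh -p old-k8s-version-132223 "sudo journalctl -xeu kubelet --no-pager | tail -n 40"
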
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 2 (406.675682ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-132223" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (490.44s)
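The failure text above suggests passing --extra-config=kubelet.cgroup-driver=systemd to minikube start and links a related minikube issue. A hedged sketch of that retry, reusing the binary, profile and Kubernetes version (v1.16.0) shown in this failure; the start below is not part of the recorded test run and the driver flag is assumed from the Docker_macOS job:

	# retry suggested by the failure output above; --driver is an assumption, the other values are taken from the log
	out/minikube-darwin-amd64 start -p old-k8s-version-132223 \
		--kubernetes-version=v1.16.0 \
		--driver=docker \
		--extra-config=kubelet.cgroup-driver=systemd
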

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:36:44.466622    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:36:44.908176    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:36:54.953938    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:36:59.132251    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:37:22.644537    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:37:55.403519    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:38:07.959375    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:38:14.385048    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:14.390825    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:14.400955    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:14.421075    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:14.461368    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:14.542172    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:14.702833    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:15.023897    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:15.664150    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:16.944367    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:19.506604    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:38:24.626943    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:38:34.867241    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:38:55.347967    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
E0108 13:38:59.439939    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:39:18.453885    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:39:36.308271    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:39:40.753171    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:39:59.695705    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:40:03.092620    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:40:10.073980    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:40:16.982643    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 13:40:21.416266    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:40:22.492700    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:41:22.752132    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 13:41:26.143586    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:41:33.122751    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:42:43.796967    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:42:55.403607    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:43:14.386079    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:43:22.177461    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:43:42.073629    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:43:59.441746    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:44:40.754120    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:44:59.696346    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 13:45:03.095223    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:45:10.074885    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:45:16.983823    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:45:21.418787    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 2 (411.813902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-132223" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
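The 9m0s wait above keeps listing pods in the "kubernetes-dashboard" namespace with the label selector shown in the warnings. A rough manual equivalent of that poll, assuming the kubeconfig context that minikube normally creates for the profile:

	# approximate manual equivalent of the harness's pod wait; the context name is assumed to match the profile
	kubectl --context old-k8s-version-132223 get pods \
		-n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
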
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-132223
helpers_test.go:235: (dbg) docker inspect old-k8s-version-132223:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f",
	        "Created": "2023-01-08T21:22:34.19825588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 271181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:28:12.59475468Z",
	            "FinishedAt": "2023-01-08T21:28:09.256674953Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hostname",
	        "HostsPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hosts",
	        "LogPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f-json.log",
	        "Name": "/old-k8s-version-132223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-132223:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-132223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-132223",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-132223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-132223",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cccd9a830563cfdf91ce0bdb68c2ca01360d5f5427e33608df2cedf47fdf29aa",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53990"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53993"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53994"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cccd9a830563",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-132223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76595a40dec8",
	                        "old-k8s-version-132223"
	                    ],
	                    "NetworkID": "8205ca6e86e721bc270dfbf0384edb3c10ca81d0afb1c6b7756a52514e9f6e59",
	                    "EndpointID": "5469fce4ff9eff554edf16f9ae862fd31a7797080d7ceac6012e86a1c678033f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
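As a quick cross-check of the inspect dump above (a sketch, not part of the captured run), a Go template of the same shape the harness uses later in this log can pull a single host port out of .NetworkSettings.Ports; per the Ports block above, the SSH (22/tcp) mapping for this container is 53990:

$ docker container inspect \
    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
    old-k8s-version-132223
# prints 53990 for the state captured above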
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 2 (407.797197ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
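To reproduce that check by hand with the binary and profile from this run (a sketch; output will vary with cluster state), rerun the status command and read its exit code. In the run above the host printed Running while the command still exited 2, which helpers_test.go treats as possibly benign:

$ out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223
$ echo $?   # 2 in the run above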
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-132223 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-132223 logs -n 25: (3.453598457s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-132717        | no-preload-132717            | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:28 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p no-preload-132717                              | no-preload-132717            | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:28 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-132717             | no-preload-132717            | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:28 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-132717                              | no-preload-132717            | jenkins | v1.28.0 | 08 Jan 23 13:28 PST | 08 Jan 23 13:33 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-132717 sudo                         | no-preload-132717            | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-132717                              | no-preload-132717            | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-132717                              | no-preload-132717            | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-132717                              | no-preload-132717            | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	| delete  | -p no-preload-132717                              | no-preload-132717            | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	| start   | -p embed-certs-133414                             | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:34 PST | 08 Jan 23 13:34 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-133414       | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:35 PST | 08 Jan 23 13:35 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-133414                             | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:35 PST | 08 Jan 23 13:35 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-133414            | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:35 PST | 08 Jan 23 13:35 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-133414                             | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:35 PST | 08 Jan 23 13:40 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-133414 sudo                        | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-133414                             | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-133414                             | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-133414                             | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	| delete  | -p embed-certs-133414                             | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	| delete  | -p                                                | disable-driver-mounts-134056 | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	|         | disable-driver-mounts-134056                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:41 PST |
	|         | default-k8s-diff-port-134057                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:41 PST | 08 Jan 23 13:41 PST |
	|         | default-k8s-diff-port-134057                      |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:41 PST | 08 Jan 23 13:42 PST |
	|         | default-k8s-diff-port-134057                      |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-134057  | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:42 PST | 08 Jan 23 13:42 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:42 PST |                     |
	|         | default-k8s-diff-port-134057                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
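Reassembled from the wrapped Args column, the last Audit row, the restart that never records an End Time and whose log follows below, corresponds to this invocation (a reconstruction of the table row, not a literal line from the report):

$ out/minikube-darwin-amd64 start -p default-k8s-diff-port-134057 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3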
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 13:42:06
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 13:42:06.344176   20152 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:42:06.344445   20152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:42:06.344451   20152 out.go:309] Setting ErrFile to fd 2...
	I0108 13:42:06.344455   20152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:42:06.344570   20152 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 13:42:06.345062   20152 out.go:303] Setting JSON to false
	I0108 13:42:06.363727   20152 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6099,"bootTime":1673208027,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 13:42:06.363825   20152 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:42:06.385464   20152 out.go:177] * [default-k8s-diff-port-134057] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:42:06.407609   20152 notify.go:220] Checking for updates...
	I0108 13:42:06.428249   20152 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:42:06.450663   20152 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:42:06.472586   20152 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:42:06.494450   20152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:42:06.515466   20152 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 13:42:06.538231   20152 config.go:180] Loaded profile config "default-k8s-diff-port-134057": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:42:06.538915   20152 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:42:06.600723   20152 docker.go:137] docker version: linux-20.10.21
	I0108 13:42:06.600852   20152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:42:06.743060   20152 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:42:06.650713809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:42:06.786870   20152 out.go:177] * Using the docker driver based on existing profile
	I0108 13:42:06.808832   20152 start.go:294] selected driver: docker
	I0108 13:42:06.808861   20152 start.go:838] validating driver "docker" against &{Name:default-k8s-diff-port-134057 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-134057 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:42:06.808985   20152 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:42:06.812777   20152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:42:06.963692   20152 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:42:06.864146769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:42:06.963844   20152 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 13:42:06.963861   20152 cni.go:95] Creating CNI manager for ""
	I0108 13:42:06.963871   20152 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:42:06.963881   20152 start_flags.go:317] config:
	{Name:default-k8s-diff-port-134057 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-134057 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:42:06.985882   20152 out.go:177] * Starting control plane node default-k8s-diff-port-134057 in cluster default-k8s-diff-port-134057
	I0108 13:42:07.008578   20152 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 13:42:07.030418   20152 out.go:177] * Pulling base image ...
	I0108 13:42:07.073541   20152 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:42:07.073553   20152 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 13:42:07.073605   20152 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 13:42:07.073622   20152 cache.go:57] Caching tarball of preloaded images
	I0108 13:42:07.073791   20152 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:42:07.073805   20152 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 13:42:07.074541   20152 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/config.json ...
	I0108 13:42:07.129325   20152 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 13:42:07.129348   20152 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 13:42:07.129367   20152 cache.go:193] Successfully downloaded all kic artifacts
	I0108 13:42:07.129422   20152 start.go:364] acquiring machines lock for default-k8s-diff-port-134057: {Name:mk12c555e2b23481fb5a6be197c4b6467285f7b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:42:07.129511   20152 start.go:368] acquired machines lock for "default-k8s-diff-port-134057" in 68.747µs
	I0108 13:42:07.129536   20152 start.go:96] Skipping create...Using existing machine configuration
	I0108 13:42:07.129545   20152 fix.go:55] fixHost starting: 
	I0108 13:42:07.129816   20152 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-134057 --format={{.State.Status}}
	I0108 13:42:07.186887   20152 fix.go:103] recreateIfNeeded on default-k8s-diff-port-134057: state=Stopped err=<nil>
	W0108 13:42:07.186916   20152 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 13:42:07.208899   20152 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-134057" ...
	I0108 13:42:07.230841   20152 cli_runner.go:164] Run: docker start default-k8s-diff-port-134057
	I0108 13:42:07.581084   20152 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-134057 --format={{.State.Status}}
	I0108 13:42:07.647280   20152 kic.go:415] container "default-k8s-diff-port-134057" state is running.
	I0108 13:42:07.648061   20152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-134057
	I0108 13:42:07.719079   20152 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/config.json ...
	I0108 13:42:07.719592   20152 machine.go:88] provisioning docker machine ...
	I0108 13:42:07.719636   20152 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-134057"
	I0108 13:42:07.719721   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:07.790619   20152 main.go:134] libmachine: Using SSH client type: native
	I0108 13:42:07.790825   20152 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0108 13:42:07.790840   20152 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-134057 && echo "default-k8s-diff-port-134057" | sudo tee /etc/hostname
	I0108 13:42:07.962513   20152 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-134057
	
	I0108 13:42:07.962623   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:08.025368   20152 main.go:134] libmachine: Using SSH client type: native
	I0108 13:42:08.025531   20152 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0108 13:42:08.025549   20152 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-134057' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-134057/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-134057' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 13:42:08.144448   20152 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:42:08.144472   20152 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 13:42:08.144494   20152 ubuntu.go:177] setting up certificates
	I0108 13:42:08.144508   20152 provision.go:83] configureAuth start
	I0108 13:42:08.144675   20152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-134057
	I0108 13:42:08.210014   20152 provision.go:138] copyHostCerts
	I0108 13:42:08.210144   20152 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 13:42:08.210157   20152 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 13:42:08.210299   20152 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 13:42:08.210552   20152 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 13:42:08.210562   20152 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 13:42:08.210638   20152 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 13:42:08.210814   20152 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 13:42:08.210820   20152 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 13:42:08.210892   20152 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 13:42:08.211031   20152 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-134057 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-134057]
	I0108 13:42:08.624591   20152 provision.go:172] copyRemoteCerts
	I0108 13:42:08.624665   20152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 13:42:08.624728   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:08.688430   20152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/default-k8s-diff-port-134057/id_rsa Username:docker}
	I0108 13:42:08.778627   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 13:42:08.796553   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 13:42:08.814300   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 13:42:08.831465   20152 provision.go:86] duration metric: configureAuth took 686.938223ms
	I0108 13:42:08.831481   20152 ubuntu.go:193] setting minikube options for container-runtime
	I0108 13:42:08.831650   20152 config.go:180] Loaded profile config "default-k8s-diff-port-134057": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:42:08.831731   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:08.890747   20152 main.go:134] libmachine: Using SSH client type: native
	I0108 13:42:08.890909   20152 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0108 13:42:08.890919   20152 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 13:42:09.010420   20152 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 13:42:09.010462   20152 ubuntu.go:71] root file system type: overlay
	I0108 13:42:09.010679   20152 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 13:42:09.010783   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:09.069950   20152 main.go:134] libmachine: Using SSH client type: native
	I0108 13:42:09.070114   20152 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0108 13:42:09.070167   20152 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 13:42:09.196624   20152 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
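If the unit swap needs manual confirmation outside the harness (a sketch using this run's profile; the harness itself runs the same systemctl check a few lines further down), the merged docker.service unit can be dumped over SSH:

$ out/minikube-darwin-amd64 -p default-k8s-diff-port-134057 ssh -- sudo systemctl cat docker.service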
	I0108 13:42:09.196754   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:09.256827   20152 main.go:134] libmachine: Using SSH client type: native
	I0108 13:42:09.256994   20152 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54728 <nil> <nil>}
	I0108 13:42:09.257007   20152 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 13:42:09.378261   20152 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:42:09.378277   20152 machine.go:91] provisioned docker machine in 1.658670228s
	I0108 13:42:09.378287   20152 start.go:300] post-start starting for "default-k8s-diff-port-134057" (driver="docker")
	I0108 13:42:09.378292   20152 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 13:42:09.378356   20152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 13:42:09.378417   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:09.438659   20152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/default-k8s-diff-port-134057/id_rsa Username:docker}
	I0108 13:42:09.525850   20152 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 13:42:09.529404   20152 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 13:42:09.529420   20152 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 13:42:09.529428   20152 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 13:42:09.529433   20152 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 13:42:09.529441   20152 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 13:42:09.529534   20152 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 13:42:09.529711   20152 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 13:42:09.529901   20152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 13:42:09.537401   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:42:09.554447   20152 start.go:303] post-start completed in 176.150219ms
	I0108 13:42:09.554542   20152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 13:42:09.554610   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:09.613494   20152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/default-k8s-diff-port-134057/id_rsa Username:docker}
	I0108 13:42:09.697893   20152 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 13:42:09.702679   20152 fix.go:57] fixHost completed within 2.573123918s
	I0108 13:42:09.702690   20152 start.go:83] releasing machines lock for "default-k8s-diff-port-134057", held for 2.573161982s
	I0108 13:42:09.702782   20152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-134057
	I0108 13:42:09.762729   20152 ssh_runner.go:195] Run: cat /version.json
	I0108 13:42:09.762750   20152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 13:42:09.762812   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:09.762830   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:09.828330   20152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/default-k8s-diff-port-134057/id_rsa Username:docker}
	I0108 13:42:09.828603   20152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54728 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/default-k8s-diff-port-134057/id_rsa Username:docker}
	I0108 13:42:09.912291   20152 ssh_runner.go:195] Run: systemctl --version
	I0108 13:42:09.971897   20152 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 13:42:09.982634   20152 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 13:42:09.982708   20152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 13:42:09.995035   20152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 13:42:10.008210   20152 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 13:42:10.075392   20152 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 13:42:10.145821   20152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:42:10.215940   20152 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 13:42:10.462686   20152 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 13:42:10.528913   20152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:42:10.597333   20152 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 13:42:10.608580   20152 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 13:42:10.608719   20152 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 13:42:10.613425   20152 start.go:472] Will wait 60s for crictl version
	I0108 13:42:10.613506   20152 ssh_runner.go:195] Run: sudo crictl version
	I0108 13:42:10.729649   20152 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0108 13:42:10.729750   20152 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:42:10.757938   20152 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:42:10.833018   20152 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 13:42:10.833269   20152 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-134057 dig +short host.docker.internal
	I0108 13:42:10.950874   20152 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 13:42:10.951017   20152 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 13:42:10.955513   20152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:42:10.965643   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:11.025292   20152 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:42:11.025378   20152 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:42:11.049636   20152 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0108 13:42:11.049663   20152 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:42:11.049757   20152 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:42:11.073976   20152 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0108 13:42:11.073999   20152 cache_images.go:84] Images are preloaded, skipping loading
	I0108 13:42:11.074096   20152 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 13:42:11.145992   20152 cni.go:95] Creating CNI manager for ""
	I0108 13:42:11.146011   20152 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:42:11.146025   20152 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 13:42:11.146039   20152 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-134057 NodeName:default-k8s-diff-port-134057 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 13:42:11.146153   20152 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-134057"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 13:42:11.146230   20152 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-134057 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-134057 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 13:42:11.146305   20152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 13:42:11.154419   20152 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 13:42:11.154486   20152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 13:42:11.162169   20152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (490 bytes)
	I0108 13:42:11.175880   20152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 13:42:11.189129   20152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0108 13:42:11.202898   20152 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 13:42:11.207129   20152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:42:11.217029   20152 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057 for IP: 192.168.67.2
	I0108 13:42:11.217234   20152 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 13:42:11.217318   20152 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 13:42:11.217448   20152 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.key
	I0108 13:42:11.217516   20152 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/apiserver.key.c7fa3a9e
	I0108 13:42:11.217577   20152 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/proxy-client.key
	I0108 13:42:11.217821   20152 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 13:42:11.217868   20152 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 13:42:11.217883   20152 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 13:42:11.217926   20152 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 13:42:11.217962   20152 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 13:42:11.218000   20152 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 13:42:11.218083   20152 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:42:11.218648   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 13:42:11.236535   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 13:42:11.254547   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 13:42:11.271942   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 13:42:11.289880   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 13:42:11.307125   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 13:42:11.325327   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 13:42:11.342977   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 13:42:11.361791   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 13:42:11.385840   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 13:42:11.405828   20152 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 13:42:11.425081   20152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 13:42:11.439513   20152 ssh_runner.go:195] Run: openssl version
	I0108 13:42:11.445079   20152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 13:42:11.453567   20152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:42:11.458165   20152 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:42:11.458222   20152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:42:11.463983   20152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 13:42:11.471937   20152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 13:42:11.480649   20152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 13:42:11.485064   20152 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 13:42:11.485116   20152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 13:42:11.490593   20152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 13:42:11.498411   20152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 13:42:11.506930   20152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 13:42:11.510890   20152 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 13:42:11.510946   20152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 13:42:11.516531   20152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 13:42:11.524140   20152 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-134057 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-134057 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:42:11.524299   20152 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:42:11.547826   20152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 13:42:11.555896   20152 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 13:42:11.555914   20152 kubeadm.go:627] restartCluster start
	I0108 13:42:11.555972   20152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 13:42:11.563090   20152 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:11.563181   20152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-134057
	I0108 13:42:11.622996   20152 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-134057" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:42:11.623159   20152 kubeconfig.go:146] "default-k8s-diff-port-134057" context is missing from /Users/jenkins/minikube-integration/15565-2761/kubeconfig - will repair!
	I0108 13:42:11.623526   20152 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:42:11.624901   20152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 13:42:11.632803   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:11.632864   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:11.642183   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:11.844297   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:11.844468   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:11.855857   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:12.044312   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:12.044484   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:12.055485   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:12.243543   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:12.243746   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:12.254845   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:12.443127   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:12.443259   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:12.454271   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:12.642304   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:12.642461   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:12.653526   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:12.843799   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:12.843927   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:12.854669   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:13.042402   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:13.042595   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:13.053772   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:13.243265   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:13.243429   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:13.254403   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:13.442494   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:13.442650   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:13.453776   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:13.642358   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:13.642441   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:13.652398   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:13.842717   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:13.842904   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:13.853882   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:14.044329   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:14.044535   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:14.055717   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:14.243440   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:14.243580   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:14.254896   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:14.443085   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:14.443165   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:14.452181   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:14.644334   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:14.644537   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:14.655387   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:14.655397   20152 api_server.go:165] Checking apiserver status ...
	I0108 13:42:14.655454   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:42:14.663660   20152 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:14.663672   20152 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0108 13:42:14.663681   20152 kubeadm.go:1114] stopping kube-system containers ...
	I0108 13:42:14.663763   20152 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:42:14.688158   20152 docker.go:444] Stopping containers: [c3556d34194d 0fe712483355 d55b43f75036 fadc55cd3c1f 2dc4f720db17 36688d5efed2 cb9f17af638c 2c3d2e2834a6 0f75fdd4d0b8 77f1935ac8de 89a2e51aabc0 2a33cb0e0ecd b5cef5628c38 59d92392045b f75dc984fc1f e747ece79ffb]
	I0108 13:42:14.688262   20152 ssh_runner.go:195] Run: docker stop c3556d34194d 0fe712483355 d55b43f75036 fadc55cd3c1f 2dc4f720db17 36688d5efed2 cb9f17af638c 2c3d2e2834a6 0f75fdd4d0b8 77f1935ac8de 89a2e51aabc0 2a33cb0e0ecd b5cef5628c38 59d92392045b f75dc984fc1f e747ece79ffb
	I0108 13:42:14.712885   20152 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 13:42:14.723439   20152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:42:14.731268   20152 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  8 21:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan  8 21:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Jan  8 21:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan  8 21:41 /etc/kubernetes/scheduler.conf
	
	I0108 13:42:14.731331   20152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0108 13:42:14.738819   20152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0108 13:42:14.746236   20152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0108 13:42:14.753538   20152 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:14.753595   20152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 13:42:14.760593   20152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0108 13:42:14.768042   20152 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:42:14.768097   20152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 13:42:14.775391   20152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:42:14.783072   20152 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 13:42:14.783086   20152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:42:14.834184   20152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:42:15.466519   20152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:42:15.598682   20152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:42:15.657435   20152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:42:15.761533   20152 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:42:15.761623   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:42:16.273799   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:42:16.773752   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:42:17.274405   20152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:42:17.286801   20152 api_server.go:71] duration metric: took 1.525262802s to wait for apiserver process to appear ...
	I0108 13:42:17.286819   20152 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:42:17.286832   20152 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54727/healthz ...
	I0108 13:42:19.742570   20152 api_server.go:278] https://127.0.0.1:54727/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 13:42:19.742594   20152 api_server.go:102] status: https://127.0.0.1:54727/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 13:42:20.243978   20152 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54727/healthz ...
	I0108 13:42:20.252127   20152 api_server.go:278] https://127.0.0.1:54727/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:42:20.252144   20152 api_server.go:102] status: https://127.0.0.1:54727/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:42:20.742795   20152 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54727/healthz ...
	I0108 13:42:20.748563   20152 api_server.go:278] https://127.0.0.1:54727/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:42:20.748583   20152 api_server.go:102] status: https://127.0.0.1:54727/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:42:21.244864   20152 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54727/healthz ...
	I0108 13:42:21.252372   20152 api_server.go:278] https://127.0.0.1:54727/healthz returned 200:
	ok
	I0108 13:42:21.258686   20152 api_server.go:140] control plane version: v1.25.3
	I0108 13:42:21.258697   20152 api_server.go:130] duration metric: took 3.971857118s to wait for apiserver health ...
	I0108 13:42:21.258705   20152 cni.go:95] Creating CNI manager for ""
	I0108 13:42:21.258712   20152 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:42:21.258722   20152 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:42:21.265772   20152 system_pods.go:59] 8 kube-system pods found
	I0108 13:42:21.265785   20152 system_pods.go:61] "coredns-565d847f94-fq4f5" [9b0e8296-f7cc-40bb-b7c6-f025f1c4939d] Running
	I0108 13:42:21.265790   20152 system_pods.go:61] "etcd-default-k8s-diff-port-134057" [0e968bff-a5c1-4a21-bcf3-5483e7e5d893] Running
	I0108 13:42:21.265796   20152 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-134057" [fd9bb158-bd85-4f3e-acb7-41c405313c4f] Running
	I0108 13:42:21.265803   20152 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-134057" [c8527407-668f-44cd-aea7-6e0d6d64d568] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 13:42:21.265808   20152 system_pods.go:61] "kube-proxy-5kmzf" [6c3b37f7-c7f3-4aa3-8cdb-c95fdb0312ea] Running
	I0108 13:42:21.265812   20152 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-134057" [ed28ef8a-885f-4f14-8920-861cd3e54333] Running
	I0108 13:42:21.265818   20152 system_pods.go:61] "metrics-server-5c8fd5cf8-w6p2j" [67f5b762-6e6b-44ae-8be6-c3ea5d059e05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 13:42:21.265823   20152 system_pods.go:61] "storage-provisioner" [99719c30-03ce-4255-b1a1-ac6400743fcf] Running
	I0108 13:42:21.265827   20152 system_pods.go:74] duration metric: took 7.100727ms to wait for pod list to return data ...
	I0108 13:42:21.265832   20152 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:42:21.268784   20152 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 13:42:21.268797   20152 node_conditions.go:123] node cpu capacity is 6
	I0108 13:42:21.268806   20152 node_conditions.go:105] duration metric: took 2.971632ms to run NodePressure ...
	I0108 13:42:21.268817   20152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:42:21.435939   20152 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0108 13:42:21.440362   20152 kubeadm.go:778] kubelet initialised
	I0108 13:42:21.440374   20152 kubeadm.go:779] duration metric: took 4.418357ms waiting for restarted kubelet to initialise ...
	I0108 13:42:21.440382   20152 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 13:42:21.446252   20152 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fq4f5" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:21.454079   20152 pod_ready.go:92] pod "coredns-565d847f94-fq4f5" in "kube-system" namespace has status "Ready":"True"
	I0108 13:42:21.454089   20152 pod_ready.go:81] duration metric: took 7.825384ms waiting for pod "coredns-565d847f94-fq4f5" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:21.454096   20152 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-134057" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:21.458977   20152 pod_ready.go:92] pod "etcd-default-k8s-diff-port-134057" in "kube-system" namespace has status "Ready":"True"
	I0108 13:42:21.458985   20152 pod_ready.go:81] duration metric: took 4.884999ms waiting for pod "etcd-default-k8s-diff-port-134057" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:21.458992   20152 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-134057" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:21.463911   20152 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-134057" in "kube-system" namespace has status "Ready":"True"
	I0108 13:42:21.463920   20152 pod_ready.go:81] duration metric: took 4.917717ms waiting for pod "kube-apiserver-default-k8s-diff-port-134057" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:21.463929   20152 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-134057" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:23.669677   20152 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-134057" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:26.171285   20152 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-134057" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:28.170919   20152 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-134057" in "kube-system" namespace has status "Ready":"True"
	I0108 13:42:28.170932   20152 pod_ready.go:81] duration metric: took 6.706965516s waiting for pod "kube-controller-manager-default-k8s-diff-port-134057" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:28.170941   20152 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5kmzf" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:28.175220   20152 pod_ready.go:92] pod "kube-proxy-5kmzf" in "kube-system" namespace has status "Ready":"True"
	I0108 13:42:28.175228   20152 pod_ready.go:81] duration metric: took 4.282268ms waiting for pod "kube-proxy-5kmzf" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:28.175234   20152 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-134057" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:28.685613   20152 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-134057" in "kube-system" namespace has status "Ready":"True"
	I0108 13:42:28.685630   20152 pod_ready.go:81] duration metric: took 510.388529ms waiting for pod "kube-scheduler-default-k8s-diff-port-134057" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:28.685638   20152 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace to be "Ready" ...
	I0108 13:42:30.699764   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:33.198010   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:35.199358   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:37.201554   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:39.701505   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:41.701727   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:44.199150   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:46.235043   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:48.700261   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:50.701551   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:53.201139   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:55.698193   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:57.698859   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:42:59.701434   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:02.199888   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:04.200958   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:06.699818   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:08.701412   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:11.201468   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:13.699084   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:15.700181   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:18.198844   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:20.199521   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:22.199847   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:24.701421   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:27.197993   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:29.199421   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:31.698713   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:33.699607   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:35.699881   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:38.201713   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:40.698432   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:42.701602   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:45.198644   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:47.199615   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:49.201244   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:51.700773   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:54.198211   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:56.199504   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:43:58.199886   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:00.701668   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:03.198508   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:05.199832   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:07.201160   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:09.700187   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:12.198162   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:14.699590   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:17.201231   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:19.201644   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:21.700760   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:24.198331   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:26.200019   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:28.697951   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:30.700375   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:33.198161   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:35.200608   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:37.201311   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:39.701639   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:42.199662   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:44.200038   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:46.201187   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:48.699620   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:50.700766   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:53.198826   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:55.199762   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:44:57.700308   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:00.198373   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:02.201124   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:04.698458   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:06.701791   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:09.199843   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:11.201740   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:13.699733   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:15.701432   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:17.701500   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:20.201677   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:22.699087   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:24.700069   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:27.199424   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:29.202330   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:31.698796   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:33.700218   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:36.198532   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:38.201867   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:40.698512   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:42.701718   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:45.198486   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:47.201980   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	I0108 13:45:49.698976   20152 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-w6p2j" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sun 2023-01-08 21:28:12 UTC, end at Sun 2023-01-08 21:45:53 UTC. --
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[128]: time="2023-01-08T21:28:15.083689881Z" level=info msg="Processing signal 'terminated'"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[128]: time="2023-01-08T21:28:15.084513971Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[128]: time="2023-01-08T21:28:15.084732043Z" level=info msg="Daemon shutdown complete"
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: docker.service: Succeeded.
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.137878168Z" level=info msg="Starting up"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139628557Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139673949Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139695987Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139707659Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141213135Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141257062Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141279605Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141290776Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.146303293Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.150605267Z" level=info msg="Loading containers: start."
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.229829971Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.260962319Z" level=info msg="Loading containers: done."
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.269713094Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.269774718Z" level=info msg="Daemon has completed initialization"
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Started Docker Application Container Engine.
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.295857338Z" level=info msg="API listen on [::]:2376"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.298848948Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-08T21:45:55Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  21:45:55 up  1:45,  0 users,  load average: 0.44, 0.75, 1.03
	Linux old-k8s-version-132223 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:28:12 UTC, end at Sun 2023-01-08 21:45:55 UTC. --
	Jan 08 21:45:54 old-k8s-version-132223 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24794]: I0108 21:45:54.182615   24794 server.go:410] Version: v1.16.0
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24794]: I0108 21:45:54.182973   24794 plugins.go:100] No cloud provider specified.
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24794]: I0108 21:45:54.183009   24794 server.go:773] Client rotation is on, will bootstrap in background
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24794]: I0108 21:45:54.184731   24794 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24794]: W0108 21:45:54.185448   24794 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24794]: W0108 21:45:54.185514   24794 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24794]: F0108 21:45:54.185538   24794 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 08 21:45:54 old-k8s-version-132223 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 08 21:45:54 old-k8s-version-132223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:45:54 old-k8s-version-132223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 927.
	Jan 08 21:45:54 old-k8s-version-132223 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:45:54 old-k8s-version-132223 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24806]: I0108 21:45:54.944002   24806 server.go:410] Version: v1.16.0
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24806]: I0108 21:45:54.944259   24806 plugins.go:100] No cloud provider specified.
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24806]: I0108 21:45:54.944296   24806 server.go:773] Client rotation is on, will bootstrap in background
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24806]: I0108 21:45:54.946154   24806 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24806]: W0108 21:45:54.946846   24806 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24806]: W0108 21:45:54.946915   24806 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 08 21:45:54 old-k8s-version-132223 kubelet[24806]: F0108 21:45:54.946942   24806 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 08 21:45:54 old-k8s-version-132223 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 08 21:45:54 old-k8s-version-132223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:45:55 old-k8s-version-132223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Jan 08 21:45:55 old-k8s-version-132223 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:45:55 old-k8s-version-132223 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E0108 13:45:55.395765   20479 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 2 (403.569323ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-132223" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.85s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:46:40.037349    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 13:46:44.911834    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:46:54.956280    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:46:59.134334    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:47:55.406930    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:48:14.387632    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:49:40.755501    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:49:59.697999    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 13:50:03.095693    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:50:10.078203    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:50:16.985065    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:50:21.418674    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:51:43.346062    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:43.351880    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:43.362387    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:43.383479    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:43.424280    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:43.506490    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:43.668648    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:43.990869    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:44.631191    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:44.912838    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:51:45.913414    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:51:48.475659    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:53.597939    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
E0108 13:51:54.959372    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0108 13:51:59.136504    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:52:03.839247    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53994/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
E0108 13:52:24.320888    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:52:55.407632    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:53:05.283301    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:53:14.390139    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:53:24.471053    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:53:59.444055    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:54:27.204273    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/default-k8s-diff-port-134057/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:54:37.436979    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:54:40.755840    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:54:47.963938    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:54:59.698949    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0108 13:55:03.096481    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 2 (399.324954ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-132223" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-132223 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-132223 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.753µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-132223 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
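The repeated rate-limiter warnings above come from the test polling the "kubernetes-dashboard" namespace for pods labeled k8s-app=kubernetes-dashboard until the 9m0s deadline expires. What follows is a minimal sketch of that kind of wait loop, assuming client-go and a kubeconfig path that points at the old-k8s-version-132223 cluster; it is illustrative only and is not the actual helpers_test.go implementation.

// Hypothetical sketch: poll for dashboard pods by label until a deadline.
// The kubeconfig path and poll interval are assumptions, not values from the log.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Overall deadline comparable to the 9m0s wait in the test.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	err = wait.PollImmediateUntil(10*time.Second, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// Once ctx expires, errors like "client rate limiter Wait returned
			// an error: context deadline exceeded" surface here.
			fmt.Println("pod list warning:", err)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				return true, nil
			}
		}
		return false, nil
	}, ctx.Done())
	fmt.Println("wait result:", err)
}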
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-132223
helpers_test.go:235: (dbg) docker inspect old-k8s-version-132223:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f",
	        "Created": "2023-01-08T21:22:34.19825588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 271181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-08T21:28:12.59475468Z",
	            "FinishedAt": "2023-01-08T21:28:09.256674953Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hostname",
	        "HostsPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/hosts",
	        "LogPath": "/var/lib/docker/containers/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f/76595a40dec81e98dd84e567ca89023de1b2da94eb2eb207ad425bbebd3fd18f-json.log",
	        "Name": "/old-k8s-version-132223",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-132223:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-132223",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77-init/diff:/var/lib/docker/overlay2/cf478f0005761c12f45c53e8731191461bd51878189b802beb3f80527bc3582c/diff:/var/lib/docker/overlay2/50547848ed232979e0349fdf0641681247e43e6ddcd120dbefccdce45eba4793/diff:/var/lib/docker/overlay2/7a8415f97e49b013d35a8b27eaf2a6be470c2a985fcd6de4711cb0018f555a3d/diff:/var/lib/docker/overlay2/435dd0b905de8bd2d6b23782418e6d76b0957f55123fe106e3b62d08c0f3da13/diff:/var/lib/docker/overlay2/70ca2e846954d00d296abfcdcefb0db4959d8ce6650e54b1071b655f7c71c823/diff:/var/lib/docker/overlay2/62715d50ae74531df8ef33be95bc933c79334fbfa0ace0bad5efc678fb43d860/diff:/var/lib/docker/overlay2/857f757c27b37807332ef8a52061b2e02614567dadd8631c9414bcf1e51c7eb6/diff:/var/lib/docker/overlay2/d3d508987063e3e43530c93ff3bb9fc842f7f56e79f9babdb9a3608990dc911e/diff:/var/lib/docker/overlay2/b9307635c9b780f8ea6af04393e82329578be8ced22abd92633ac5912ce752d7/diff:/var/lib/docker/overlay2/ab3124
e34a60bd3d2f554d712f9db28fed57b9030105f996b2a40b6c5c68e6a3/diff:/var/lib/docker/overlay2/2664538922f7cea7eec3238db144935f7380d439e3aaf6611f7f6232515b6c70/diff:/var/lib/docker/overlay2/fcf4ff3c9f738d263ccde0d59a8f0bbbf77d5fe10a37a0b64782c90258c52f05/diff:/var/lib/docker/overlay2/9ebb5fb88ffad88aca62110ea1902a046eb8d27eab4d1b03380f2799a61190e4/diff:/var/lib/docker/overlay2/16c6977d1dcb3aef6968fa378be9d39da565962707fb1c2ebcc08741b3ebabb0/diff:/var/lib/docker/overlay2/4a1a615ba2290b96a2289b3709f9e4e2b7585a7880463549ed90c765c1cf364b/diff:/var/lib/docker/overlay2/8875d4ae4e008b8ed7a6c64b581bc9a7437e20bc59a10db038658c3c3abbd626/diff:/var/lib/docker/overlay2/a92bc2bed5e566a6a12e091f0b6adcc5120ec1a5a04a079614da38b8e08b4f4d/diff:/var/lib/docker/overlay2/507f4a1c4f60a4445244bd4611fbdebeda31c842886f650aff0c93fe1cbf551b/diff:/var/lib/docker/overlay2/4b6f57707d2af391e02b8fbab74a152c38778d850194db7c366c972d607c3683/diff:/var/lib/docker/overlay2/30f07cc70078d1a1064ae4c014017806ca9cab561445ba4999d279d77ab9efd9/diff:/var/lib/d
ocker/overlay2/a7ce66498ad28650a9c447ffdd1776688091a1f96a77ba104690bbd632828084/diff:/var/lib/docker/overlay2/375e879a1c9abf773aadafa9214b4cd6a5fa848c3521ded951069c1ef16d03c8/diff:/var/lib/docker/overlay2/dbf6bd39c4440680d1fb7dcfc66134acd119d818a0da224feea03b15985518ef/diff:/var/lib/docker/overlay2/f5247f50460095d94d94f10c8f29a1106915f3f694a40dbc0ff0a7494ceef2d6/diff:/var/lib/docker/overlay2/eca77ea4b87f19d3e4b6258b307c944a60d8a11e38e520715736d86cfcb0a340/diff:/var/lib/docker/overlay2/af8edadcadb813c9b8bcb395db5b7025128f75336edf043daf159e86115fa2d0/diff:/var/lib/docker/overlay2/82696f404a416ef0c49184f767d3a67d76997ca4b3ab9f2553ab364b9e902189/diff:/var/lib/docker/overlay2/aa5f3a92ab78aa13af6b0e4ca676e887e32b388ad037098956622b2bb2d64653/diff:/var/lib/docker/overlay2/3fd93bd37311284bcd588f06d2e1157fcae183e793e58b9e91af55526752251b/diff:/var/lib/docker/overlay2/5cac080397d4de235a72e46ee68fdd622d9fba1dbd60139a59881df7cb97cdd3/diff:/var/lib/docker/overlay2/1534f7a89f3f0459a57d2264ddb9c4b2e95b9348c6c3fb6839c3f2cd1aa
7009a/diff:/var/lib/docker/overlay2/0fa983ab9147631e9188574a597cbb1ada8bd69b4eff49391c9704d239988f73/diff:/var/lib/docker/overlay2/2ff1f973faf98b7d46648d22c4c0cb73675d5b3f37e6906c457a45823a29fe1e/diff:/var/lib/docker/overlay2/1d56ab53b6c377c5835e50d09effb1a1a727279cb8883e5d4cda8c35b4600695/diff:/var/lib/docker/overlay2/903da5933dc4be1a0f9e38defe40072a669562fc25c401b8b9a02def3b94bec6/diff:/var/lib/docker/overlay2/4be7777ae41ce96ae10877862b8954fa1ee593061f9647f30de2ccdd036bb452/diff:/var/lib/docker/overlay2/ae284268a6cd8a67190129d99bdb6a97d27c88bfe4536cbdf20bc356c6cb5ad4/diff:/var/lib/docker/overlay2/207f47b4e74ecca6010612742ebe5cd0c8363dd1634d58f37b9df57cefc063f2/diff:/var/lib/docker/overlay2/65d59701773a038dc5533dece8ebc52ebf3efc833e94c91c470d1f6593bdf196/diff:/var/lib/docker/overlay2/3ae8859886568a0e539b79f17ace58f390ab402b4428c45188c2587640d73f10/diff:/var/lib/docker/overlay2/bf63d45714e6f77ee9a5cf0fd198e479af953d7ea25a6f1f76633e63bd9b827f/diff:/var/lib/docker/overlay2/ac8c76daac6f3c2d9c8ceee7ed9defe04f1a31
f0271684f4258c0f634ed1fce1/diff:/var/lib/docker/overlay2/1cd45a0f7910466989a7434f8eec249f0e295b686baad0e434a2d34dd6e82a47/diff:/var/lib/docker/overlay2/d72980245e92027e64b68ee0fc086b48f102ea405ffbebfd8220036fdbe805d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0669df26397a17aef68e70fb93bc70433c707e50b0b2a2cf95ad87e2e8d05b77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-132223",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-132223/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-132223",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-132223",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cccd9a830563cfdf91ce0bdb68c2ca01360d5f5427e33608df2cedf47fdf29aa",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53990"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53993"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53994"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cccd9a830563",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-132223": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76595a40dec8",
	                        "old-k8s-version-132223"
	                    ],
	                    "NetworkID": "8205ca6e86e721bc270dfbf0384edb3c10ca81d0afb1c6b7756a52514e9f6e59",
	                    "EndpointID": "5469fce4ff9eff554edf16f9ae862fd31a7797080d7ceac6012e86a1c678033f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
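The post-mortem above shells out to the docker CLI to capture container state. For reference, the same fields (run status, restart count) can be read programmatically; the sketch below uses the Docker Go SDK and the container name taken from the log, assumes a local Docker daemon, and is not part of the test suite.

// Illustrative only: programmatic equivalent of the `docker inspect` post-mortem.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Container name from the report; everything else is an assumption.
	info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-132223")
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", info.State.Status, "running:", info.State.Running)
	fmt.Println("restart count:", info.RestartCount)
}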
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 2 (396.57161ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-132223 logs -n 25
E0108 13:55:10.077615    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-132223 logs -n 25: (3.450467032s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-133414                                      | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-133414                                      | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-133414                                      | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	| delete  | -p embed-certs-133414                                      | embed-certs-133414           | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	| delete  | -p                                                         | disable-driver-mounts-134056 | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:40 PST |
	|         | disable-driver-mounts-134056                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:40 PST | 08 Jan 23 13:41 PST |
	|         | default-k8s-diff-port-134057                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:41 PST | 08 Jan 23 13:41 PST |
	|         | default-k8s-diff-port-134057                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:41 PST | 08 Jan 23 13:42 PST |
	|         | default-k8s-diff-port-134057                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-134057           | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:42 PST | 08 Jan 23 13:42 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:42 PST | 08 Jan 23 13:47 PST |
	|         | default-k8s-diff-port-134057                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:47 PST | 08 Jan 23 13:47 PST |
	|         | default-k8s-diff-port-134057                               |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:47 PST | 08 Jan 23 13:47 PST |
	|         | default-k8s-diff-port-134057                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:47 PST | 08 Jan 23 13:47 PST |
	|         | default-k8s-diff-port-134057                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:47 PST | 08 Jan 23 13:47 PST |
	|         | default-k8s-diff-port-134057                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-134057 | jenkins | v1.28.0 | 08 Jan 23 13:47 PST | 08 Jan 23 13:47 PST |
	|         | default-k8s-diff-port-134057                               |                              |         |         |                     |                     |
	| start   | -p newest-cni-134733 --memory=2200 --alsologtostderr       | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:47 PST | 08 Jan 23 13:48 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-134733                 | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:48 PST | 08 Jan 23 13:48 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-134733                                       | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:48 PST | 08 Jan 23 13:48 PST |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-134733                      | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:48 PST | 08 Jan 23 13:48 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-134733 --memory=2200 --alsologtostderr       | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:48 PST | 08 Jan 23 13:48 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-134733 sudo                                  | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:48 PST | 08 Jan 23 13:48 PST |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-134733                                       | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:48 PST | 08 Jan 23 13:48 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-134733                                       | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:48 PST | 08 Jan 23 13:48 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-134733                                       | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:48 PST | 08 Jan 23 13:48 PST |
	| delete  | -p newest-cni-134733                                       | newest-cni-134733            | jenkins | v1.28.0 | 08 Jan 23 13:48 PST | 08 Jan 23 13:48 PST |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 13:48:30
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 13:48:30.315580   20977 out.go:296] Setting OutFile to fd 1 ...
	I0108 13:48:30.315757   20977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:48:30.315763   20977 out.go:309] Setting ErrFile to fd 2...
	I0108 13:48:30.315767   20977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 13:48:30.315890   20977 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 13:48:30.316381   20977 out.go:303] Setting JSON to false
	I0108 13:48:30.335003   20977 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6483,"bootTime":1673208027,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 13:48:30.335111   20977 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 13:48:30.356984   20977 out.go:177] * [newest-cni-134733] minikube v1.28.0 on Darwin 13.0.1
	I0108 13:48:30.399750   20977 notify.go:220] Checking for updates...
	I0108 13:48:30.421859   20977 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 13:48:30.444001   20977 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:48:30.465789   20977 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 13:48:30.487957   20977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 13:48:30.509788   20977 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 13:48:30.531851   20977 config.go:180] Loaded profile config "newest-cni-134733": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:48:30.532200   20977 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 13:48:30.592143   20977 docker.go:137] docker version: linux-20.10.21
	I0108 13:48:30.592285   20977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:48:30.732363   20977 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:48:30.642474885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:48:30.776198   20977 out.go:177] * Using the docker driver based on existing profile
	I0108 13:48:30.797966   20977 start.go:294] selected driver: docker
	I0108 13:48:30.798072   20977 start.go:838] validating driver "docker" against &{Name:newest-cni-134733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-134733 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:48:30.798235   20977 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 13:48:30.802074   20977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 13:48:30.948921   20977 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 21:48:30.852559322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 13:48:30.949132   20977 start_flags.go:929] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0108 13:48:30.949165   20977 cni.go:95] Creating CNI manager for ""
	I0108 13:48:30.949190   20977 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:48:30.949209   20977 start_flags.go:317] config:
	{Name:newest-cni-134733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-134733 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:48:30.992969   20977 out.go:177] * Starting control plane node newest-cni-134733 in cluster newest-cni-134733
	I0108 13:48:31.016070   20977 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 13:48:31.038098   20977 out.go:177] * Pulling base image ...
	I0108 13:48:31.081898   20977 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:48:31.081924   20977 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 13:48:31.081999   20977 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 13:48:31.082014   20977 cache.go:57] Caching tarball of preloaded images
	I0108 13:48:31.082228   20977 preload.go:174] Found /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 13:48:31.082251   20977 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 13:48:31.083279   20977 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733/config.json ...
	I0108 13:48:31.203418   20977 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0108 13:48:31.203435   20977 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0108 13:48:31.203454   20977 cache.go:193] Successfully downloaded all kic artifacts
	I0108 13:48:31.203497   20977 start.go:364] acquiring machines lock for newest-cni-134733: {Name:mk4bad96f8cfc69512aea74c09f553d59f6d1dcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 13:48:31.203580   20977 start.go:368] acquired machines lock for "newest-cni-134733" in 64.05µs
	I0108 13:48:31.203605   20977 start.go:96] Skipping create...Using existing machine configuration
	I0108 13:48:31.203614   20977 fix.go:55] fixHost starting: 
	I0108 13:48:31.203881   20977 cli_runner.go:164] Run: docker container inspect newest-cni-134733 --format={{.State.Status}}
	I0108 13:48:31.260476   20977 fix.go:103] recreateIfNeeded on newest-cni-134733: state=Stopped err=<nil>
	W0108 13:48:31.260504   20977 fix.go:129] unexpected machine state, will restart: <nil>
	I0108 13:48:31.304410   20977 out.go:177] * Restarting existing docker container for "newest-cni-134733" ...
	I0108 13:48:31.326616   20977 cli_runner.go:164] Run: docker start newest-cni-134733
	I0108 13:48:31.668002   20977 cli_runner.go:164] Run: docker container inspect newest-cni-134733 --format={{.State.Status}}
	I0108 13:48:31.735011   20977 kic.go:415] container "newest-cni-134733" state is running.
	I0108 13:48:31.735659   20977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-134733
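The two docker container inspect calls above read the restarted container's state and its network addresses through Go templates. A minimal sketch of the same queries, run through os/exec rather than minikube's cli_runner (container name taken from the log; error handling reduced to log.Fatalf):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// inspect runs `docker container inspect` with a Go template and returns the trimmed output.
func inspect(name, format string) string {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	if err != nil {
		log.Fatalf("docker inspect %s: %v", name, err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	name := "newest-cni-134733" // container name from the log above
	state := inspect(name, "{{.State.Status}}")
	addrs := inspect(name, "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}")
	fmt.Printf("state=%s addrs=%s\n", state, addrs)
}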
	I0108 13:48:31.802939   20977 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733/config.json ...
	I0108 13:48:31.803521   20977 machine.go:88] provisioning docker machine ...
	I0108 13:48:31.803551   20977 ubuntu.go:169] provisioning hostname "newest-cni-134733"
	I0108 13:48:31.803643   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:31.875318   20977 main.go:134] libmachine: Using SSH client type: native
	I0108 13:48:31.875516   20977 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55270 <nil> <nil>}
	I0108 13:48:31.875531   20977 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-134733 && echo "newest-cni-134733" | sudo tee /etc/hostname
	I0108 13:48:32.008341   20977 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-134733
	
	I0108 13:48:32.008459   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:32.071185   20977 main.go:134] libmachine: Using SSH client type: native
	I0108 13:48:32.071372   20977 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55270 <nil> <nil>}
	I0108 13:48:32.071386   20977 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-134733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-134733/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-134733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 13:48:32.190541   20977 main.go:134] libmachine: SSH cmd err, output: <nil>: 
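The shell fragment above makes the hostname resolve locally: if no /etc/hosts line already ends in newest-cni-134733, it either rewrites the 127.0.1.1 entry or appends one. A rough Go equivalent of that idempotent edit, as a sketch only (the real step runs the shell fragment over SSH inside the container, and writing /etc/hosts requires root):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	const host = "newest-cni-134733" // hostname from the log above
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	text := string(data)
	switch {
	case regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(host)+`$`).MatchString(text):
		// hostname already present, nothing to do
	case regexp.MustCompile(`(?m)^127\.0\.1\.1\s`).MatchString(text):
		// rewrite the existing 127.0.1.1 entry to point at this hostname
		text = regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`).ReplaceAllString(text, "127.0.1.1 "+host)
	default:
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + host + "\n"
	}
	if err := os.WriteFile("/etc/hosts", []byte(text), 0644); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry ensured for", host)
}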
	I0108 13:48:32.190568   20977 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
	I0108 13:48:32.190605   20977 ubuntu.go:177] setting up certificates
	I0108 13:48:32.190614   20977 provision.go:83] configureAuth start
	I0108 13:48:32.190713   20977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-134733
	I0108 13:48:32.254196   20977 provision.go:138] copyHostCerts
	I0108 13:48:32.254310   20977 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
	I0108 13:48:32.254321   20977 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
	I0108 13:48:32.254423   20977 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
	I0108 13:48:32.254644   20977 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
	I0108 13:48:32.254653   20977 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
	I0108 13:48:32.254719   20977 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
	I0108 13:48:32.254901   20977 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
	I0108 13:48:32.254907   20977 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
	I0108 13:48:32.254970   20977 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
	I0108 13:48:32.255100   20977 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.newest-cni-134733 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-134733]
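The server certificate above is generated against the minikube CA with both IP and DNS subject alternative names. As a simplified illustration of building a certificate with that SAN list via crypto/x509 (self-signed here to keep the sketch short; the real certificate is signed with ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-134733"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the config dump above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list as logged: san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-134733]
		DNSNames:    []string{"localhost", "minikube", "newest-cni-134733"},
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}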
	I0108 13:48:32.496924   20977 provision.go:172] copyRemoteCerts
	I0108 13:48:32.497002   20977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 13:48:32.497078   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:32.560594   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55270 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/newest-cni-134733/id_rsa Username:docker}
	I0108 13:48:32.646597   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 13:48:32.665452   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0108 13:48:32.684815   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 13:48:32.704365   20977 provision.go:86] duration metric: configureAuth took 513.721988ms
	I0108 13:48:32.704384   20977 ubuntu.go:193] setting minikube options for container-runtime
	I0108 13:48:32.704556   20977 config.go:180] Loaded profile config "newest-cni-134733": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:48:32.704634   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:32.767775   20977 main.go:134] libmachine: Using SSH client type: native
	I0108 13:48:32.767946   20977 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55270 <nil> <nil>}
	I0108 13:48:32.767955   20977 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 13:48:32.886521   20977 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0108 13:48:32.886534   20977 ubuntu.go:71] root file system type: overlay
	I0108 13:48:32.886670   20977 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 13:48:32.886767   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:32.946801   20977 main.go:134] libmachine: Using SSH client type: native
	I0108 13:48:32.946976   20977 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55270 <nil> <nil>}
	I0108 13:48:32.947026   20977 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 13:48:33.074917   20977 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 13:48:33.075030   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:33.135501   20977 main.go:134] libmachine: Using SSH client type: native
	I0108 13:48:33.135667   20977 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55270 <nil> <nil>}
	I0108 13:48:33.135681   20977 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 13:48:33.256958   20977 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0108 13:48:33.256981   20977 machine.go:91] provisioned docker machine in 1.453445739s
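The provisioning step above writes the rendered unit to docker.service.new, diffs it against the installed unit, and only swaps the file and restarts Docker when the two differ, so an unchanged configuration does not force a restart. A compact sketch of that compare-then-swap idea (paths from the log; systemctl driven via os/exec; must run as root):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v: %v", args, err)
	}
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	oldUnit, _ := os.ReadFile(unit) // a missing file simply counts as "differs"
	newUnit, err := os.ReadFile(unit + ".new")
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(oldUnit, newUnit) {
		log.Println("docker.service unchanged, skipping restart")
		return
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		log.Fatal(err)
	}
	run("systemctl", "daemon-reload")
	run("systemctl", "enable", "docker")
	run("systemctl", "restart", "docker")
}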
	I0108 13:48:33.256992   20977 start.go:300] post-start starting for "newest-cni-134733" (driver="docker")
	I0108 13:48:33.256998   20977 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 13:48:33.257096   20977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 13:48:33.257164   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:33.316884   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55270 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/newest-cni-134733/id_rsa Username:docker}
	I0108 13:48:33.404107   20977 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 13:48:33.407757   20977 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 13:48:33.407773   20977 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 13:48:33.407786   20977 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 13:48:33.407791   20977 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0108 13:48:33.407799   20977 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
	I0108 13:48:33.407894   20977 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
	I0108 13:48:33.408068   20977 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
	I0108 13:48:33.408276   20977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 13:48:33.415612   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:48:33.432906   20977 start.go:303] post-start completed in 175.903042ms
	I0108 13:48:33.432993   20977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 13:48:33.433058   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:33.491482   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55270 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/newest-cni-134733/id_rsa Username:docker}
	I0108 13:48:33.576500   20977 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 13:48:33.581069   20977 fix.go:57] fixHost completed within 2.377442715s
	I0108 13:48:33.581082   20977 start.go:83] releasing machines lock for "newest-cni-134733", held for 2.377485998s
	I0108 13:48:33.581196   20977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-134733
	I0108 13:48:33.639810   20977 ssh_runner.go:195] Run: cat /version.json
	I0108 13:48:33.639820   20977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 13:48:33.639896   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:33.639900   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:33.703639   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55270 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/newest-cni-134733/id_rsa Username:docker}
	I0108 13:48:33.703730   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55270 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/newest-cni-134733/id_rsa Username:docker}
	I0108 13:48:33.788294   20977 ssh_runner.go:195] Run: systemctl --version
	I0108 13:48:33.849791   20977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 13:48:33.857541   20977 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0108 13:48:33.871105   20977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:48:33.950841   20977 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 13:48:34.036143   20977 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 13:48:34.046863   20977 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0108 13:48:34.046936   20977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 13:48:34.056839   20977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 13:48:34.069863   20977 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 13:48:34.136545   20977 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 13:48:34.208149   20977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:48:34.275624   20977 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 13:48:34.518345   20977 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 13:48:34.585331   20977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 13:48:34.656735   20977 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0108 13:48:34.667514   20977 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 13:48:34.667612   20977 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 13:48:34.671962   20977 start.go:472] Will wait 60s for crictl version
	I0108 13:48:34.672032   20977 ssh_runner.go:195] Run: sudo crictl version
	I0108 13:48:34.706130   20977 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0108 13:48:34.706224   20977 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:48:34.735146   20977 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 13:48:34.817040   20977 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0108 13:48:34.817222   20977 cli_runner.go:164] Run: docker exec -t newest-cni-134733 dig +short host.docker.internal
	I0108 13:48:34.936609   20977 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0108 13:48:34.936721   20977 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0108 13:48:34.941201   20977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:48:34.951227   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:35.031719   20977 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0108 13:48:35.053618   20977 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 13:48:35.053799   20977 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:48:35.079630   20977 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 13:48:35.079651   20977 docker.go:543] Images already preloaded, skipping extraction
	I0108 13:48:35.079755   20977 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 13:48:35.104076   20977 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 13:48:35.104094   20977 cache_images.go:84] Images are preloaded, skipping loading
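The docker images --format {{.Repository}}:{{.Tag}} listing above is what lets the preload check conclude that tarball extraction can be skipped. A small sketch of that decision, comparing the daemon's image list against the expected set for Kubernetes v1.25.3 (image names copied from the stdout block above):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/kube-controller-manager:v1.25.3",
		"registry.k8s.io/kube-scheduler:v1.25.3",
		"registry.k8s.io/kube-proxy:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
		"registry.k8s.io/pause:3.8",
		"k8s.gcr.io/pause:3.6",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would extract preload:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}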
	I0108 13:48:35.104206   20977 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 13:48:35.174357   20977 cni.go:95] Creating CNI manager for ""
	I0108 13:48:35.174374   20977 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:48:35.174432   20977 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0108 13:48:35.174444   20977 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-134733 NodeName:newest-cni-134733 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArg
s:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0108 13:48:35.174599   20977 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-134733"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 13:48:35.174739   20977 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-134733 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-134733 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
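Two values rendered into the kubeadm config above, podSubnet (192.168.111.111/16, from the kubeadm.pod-network-cidr extra option) and serviceSubnet (10.96.0.0/12), must parse as CIDRs and must not overlap. A quick sketch of that check with net.ParseCIDR, which also shows the odd-looking pod value normalizing to its network address:

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// values taken from the generated kubeadm config above
	_, podNet, err := net.ParseCIDR("192.168.111.111/16")
	if err != nil {
		log.Fatal(err)
	}
	_, svcNet, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("podSubnet normalizes to", podNet) // 192.168.0.0/16
	fmt.Println("serviceSubnet is", svcNet)        // 10.96.0.0/12
	// two CIDR prefixes overlap exactly when one contains the other's network address
	if podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP) {
		log.Fatal("pod and service CIDRs overlap")
	}
	fmt.Println("pod and service CIDRs do not overlap")
}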
	I0108 13:48:35.174850   20977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0108 13:48:35.183507   20977 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 13:48:35.183586   20977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 13:48:35.191076   20977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I0108 13:48:35.204213   20977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 13:48:35.217043   20977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I0108 13:48:35.230304   20977 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0108 13:48:35.234288   20977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 13:48:35.244416   20977 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733 for IP: 192.168.67.2
	I0108 13:48:35.244532   20977 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
	I0108 13:48:35.244584   20977 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
	I0108 13:48:35.244681   20977 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733/client.key
	I0108 13:48:35.244747   20977 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733/apiserver.key.c7fa3a9e
	I0108 13:48:35.244810   20977 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733/proxy-client.key
	I0108 13:48:35.245051   20977 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
	W0108 13:48:35.245093   20977 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
	I0108 13:48:35.245105   20977 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 13:48:35.245141   20977 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
	I0108 13:48:35.245178   20977 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
	I0108 13:48:35.245211   20977 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
	I0108 13:48:35.245287   20977 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
	I0108 13:48:35.245867   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 13:48:35.263648   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 13:48:35.281316   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 13:48:35.299630   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/newest-cni-134733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 13:48:35.317113   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 13:48:35.356293   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 13:48:35.376288   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 13:48:35.396424   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 13:48:35.415502   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
	I0108 13:48:35.434515   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
	I0108 13:48:35.453110   20977 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 13:48:35.470871   20977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 13:48:35.484463   20977 ssh_runner.go:195] Run: openssl version
	I0108 13:48:35.490755   20977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
	I0108 13:48:35.499112   20977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
	I0108 13:48:35.503384   20977 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:32 /usr/share/ca-certificates/4083.pem
	I0108 13:48:35.503434   20977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
	I0108 13:48:35.509443   20977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
	I0108 13:48:35.517405   20977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
	I0108 13:48:35.525847   20977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
	I0108 13:48:35.529882   20977 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:32 /usr/share/ca-certificates/40832.pem
	I0108 13:48:35.529940   20977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
	I0108 13:48:35.535468   20977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 13:48:35.543386   20977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 13:48:35.551960   20977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:48:35.556255   20977 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:48:35.556319   20977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 13:48:35.561780   20977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
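The openssl/ln sequence above installs each CA certificate under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem in this run). A sketch of that hash-and-symlink step from Go, shelling out to openssl for the hash (paths from the log; needs root to write /etc/ssl/certs):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("symlink already present:", link)
		return
	}
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", cert, "->", link)
}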
	I0108 13:48:35.569560   20977 kubeadm.go:396] StartCluster: {Name:newest-cni-134733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-134733 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNo
deRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 13:48:35.569695   20977 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:48:35.594123   20977 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 13:48:35.602448   20977 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0108 13:48:35.602462   20977 kubeadm.go:627] restartCluster start
	I0108 13:48:35.602517   20977 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 13:48:35.609606   20977 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:35.609768   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:35.668820   20977 kubeconfig.go:135] verify returned: extract IP: "newest-cni-134733" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:48:35.668979   20977 kubeconfig.go:146] "newest-cni-134733" context is missing from /Users/jenkins/minikube-integration/15565-2761/kubeconfig - will repair!
	I0108 13:48:35.669272   20977 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:48:35.670627   20977 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 13:48:35.678590   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:35.678657   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:35.687336   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:35.889396   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:35.889539   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:35.900026   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:36.088222   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:36.088423   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:36.099326   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:36.287874   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:36.288045   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:36.299087   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:36.487903   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:36.488023   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:36.499081   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:36.689491   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:36.689649   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:36.700605   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:36.887783   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:36.887850   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:36.897143   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:37.089048   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:37.089197   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:37.099965   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:37.289521   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:37.289676   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:37.300897   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:37.488065   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:37.488245   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:37.499163   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:37.688642   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:37.688776   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:37.699309   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:37.888192   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:37.888323   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:37.899112   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:38.087864   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:38.087987   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:38.099008   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:38.288515   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:38.288699   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:38.299563   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:38.487532   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:38.487691   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:38.498877   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:38.687785   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:38.687948   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:38.699126   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:38.699139   20977 api_server.go:165] Checking apiserver status ...
	I0108 13:48:38.699200   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 13:48:38.707647   20977 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:38.707659   20977 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
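The block of repeated pgrep attempts above is a fixed-interval poll for the kube-apiserver process that eventually gives up and concludes the cluster needs to be reconfigured. A minimal sketch of that poll-until-deadline pattern (the 200ms interval matches the spacing of the attempts above; the 3s deadline is illustrative, not the exact value minikube uses, and the real command runs over SSH inside the container):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process is visible to pgrep.
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(3 * time.Second)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the condition; needs reconfigure")
}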
	I0108 13:48:38.707667   20977 kubeadm.go:1114] stopping kube-system containers ...
	I0108 13:48:38.707742   20977 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 13:48:38.733435   20977 docker.go:444] Stopping containers: [add0083b826f 0c305d85b9af 4b74690db8b0 acb0c14bab9e e924d76e1e27 e4c6485780b0 81dd65bcf433 2f6ac23af2cc 9e5a69f6ac65 b510821d6a34 1b247ba26df6 90ea21013cbc 76f214e106fc eb1f4b14046b 9b207f80aef3 fdd60ee5b12c]
	I0108 13:48:38.733532   20977 ssh_runner.go:195] Run: docker stop add0083b826f 0c305d85b9af 4b74690db8b0 acb0c14bab9e e924d76e1e27 e4c6485780b0 81dd65bcf433 2f6ac23af2cc 9e5a69f6ac65 b510821d6a34 1b247ba26df6 90ea21013cbc 76f214e106fc eb1f4b14046b 9b207f80aef3 fdd60ee5b12c
	I0108 13:48:38.760545   20977 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 13:48:38.771005   20977 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 13:48:38.778916   20977 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan  8 21:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  8 21:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan  8 21:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  8 21:47 /etc/kubernetes/scheduler.conf
	
	I0108 13:48:38.778980   20977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0108 13:48:38.786844   20977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0108 13:48:38.794428   20977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0108 13:48:38.801918   20977 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:38.801978   20977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0108 13:48:38.809108   20977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0108 13:48:38.816372   20977 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0108 13:48:38.816435   20977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0108 13:48:38.823498   20977 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 13:48:38.831034   20977 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 13:48:38.831048   20977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:48:38.882558   20977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:48:39.508624   20977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:48:39.639254   20977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:48:39.694117   20977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:48:39.783284   20977 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:48:39.783392   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:48:40.295636   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:48:40.795192   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:48:41.295208   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:48:41.351943   20977 api_server.go:71] duration metric: took 1.568658277s to wait for apiserver process to appear ...
	I0108 13:48:41.351976   20977 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:48:41.352005   20977 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55274/healthz ...
	I0108 13:48:44.547043   20977 api_server.go:278] https://127.0.0.1:55274/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 13:48:44.547066   20977 api_server.go:102] status: https://127.0.0.1:55274/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 13:48:45.047216   20977 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55274/healthz ...
	I0108 13:48:45.054469   20977 api_server.go:278] https://127.0.0.1:55274/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:48:45.054511   20977 api_server.go:102] status: https://127.0.0.1:55274/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:48:45.547295   20977 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55274/healthz ...
	I0108 13:48:45.555456   20977 api_server.go:278] https://127.0.0.1:55274/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 13:48:45.555475   20977 api_server.go:102] status: https://127.0.0.1:55274/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 13:48:46.047523   20977 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55274/healthz ...
	I0108 13:48:46.056755   20977 api_server.go:278] https://127.0.0.1:55274/healthz returned 200:
	ok
	I0108 13:48:46.067492   20977 api_server.go:140] control plane version: v1.25.3
	I0108 13:48:46.067508   20977 api_server.go:130] duration metric: took 4.715505668s to wait for apiserver health ...
	I0108 13:48:46.067516   20977 cni.go:95] Creating CNI manager for ""
	I0108 13:48:46.067522   20977 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 13:48:46.067533   20977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:48:46.079549   20977 system_pods.go:59] 8 kube-system pods found
	I0108 13:48:46.079576   20977 system_pods.go:61] "coredns-565d847f94-g6qsh" [076ac27f-b652-4d28-9197-b128a2b49d25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 13:48:46.079583   20977 system_pods.go:61] "etcd-newest-cni-134733" [f9cf22af-97f2-4321-8dd6-2e15fbf0015e] Running
	I0108 13:48:46.079595   20977 system_pods.go:61] "kube-apiserver-newest-cni-134733" [d0116b93-577a-4b17-947b-a713e5c14a32] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 13:48:46.079603   20977 system_pods.go:61] "kube-controller-manager-newest-cni-134733" [6e25bf06-327b-42a3-9041-0820b56ad83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 13:48:46.079611   20977 system_pods.go:61] "kube-proxy-hhnrj" [9c21f333-fa2b-4c52-b85f-2c1ab9160293] Running
	I0108 13:48:46.079617   20977 system_pods.go:61] "kube-scheduler-newest-cni-134733" [21e9ce4f-5437-46c3-9674-d146ae26ea05] Running
	I0108 13:48:46.079631   20977 system_pods.go:61] "metrics-server-5c8fd5cf8-pkv86" [cdc0296f-59fc-4b34-865a-e8dfb1851191] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 13:48:46.079643   20977 system_pods.go:61] "storage-provisioner" [e7891493-a9ec-45f5-9540-784ceab1b93c] Running
	I0108 13:48:46.079652   20977 system_pods.go:74] duration metric: took 12.11084ms to wait for pod list to return data ...
	I0108 13:48:46.079665   20977 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:48:46.084961   20977 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 13:48:46.084980   20977 node_conditions.go:123] node cpu capacity is 6
	I0108 13:48:46.084989   20977 node_conditions.go:105] duration metric: took 5.316596ms to run NodePressure ...
	I0108 13:48:46.085002   20977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 13:48:46.397692   20977 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 13:48:46.457194   20977 ops.go:34] apiserver oom_adj: -16
	I0108 13:48:46.457208   20977 kubeadm.go:631] restartCluster took 10.854698961s
	I0108 13:48:46.457219   20977 kubeadm.go:398] StartCluster complete in 10.887622137s
	I0108 13:48:46.457237   20977 settings.go:142] acquiring lock: {Name:mkc40aeb9f069e96cc5c51255984662f0292a058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:48:46.457348   20977 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 13:48:46.457957   20977 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/kubeconfig: {Name:mk71550ab701dee908d8134473648649a6392238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 13:48:46.462045   20977 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-134733" rescaled to 1
	I0108 13:48:46.462118   20977 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 13:48:46.462139   20977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 13:48:46.462173   20977 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0108 13:48:46.485204   20977 out.go:177] * Verifying Kubernetes components...
	I0108 13:48:46.462386   20977 config.go:180] Loaded profile config "newest-cni-134733": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 13:48:46.485267   20977 addons.go:65] Setting default-storageclass=true in profile "newest-cni-134733"
	I0108 13:48:46.485268   20977 addons.go:65] Setting metrics-server=true in profile "newest-cni-134733"
	I0108 13:48:46.485278   20977 addons.go:65] Setting dashboard=true in profile "newest-cni-134733"
	I0108 13:48:46.485277   20977 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-134733"
	I0108 13:48:46.526327   20977 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-134733"
	I0108 13:48:46.526357   20977 addons.go:227] Setting addon dashboard=true in "newest-cni-134733"
	I0108 13:48:46.526366   20977 addons.go:227] Setting addon metrics-server=true in "newest-cni-134733"
	W0108 13:48:46.526373   20977 addons.go:236] addon storage-provisioner should already be in state true
	W0108 13:48:46.526384   20977 addons.go:236] addon dashboard should already be in state true
	W0108 13:48:46.526395   20977 addons.go:236] addon metrics-server should already be in state true
	I0108 13:48:46.526396   20977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 13:48:46.526383   20977 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-134733"
	I0108 13:48:46.526513   20977 host.go:66] Checking if "newest-cni-134733" exists ...
	I0108 13:48:46.526512   20977 host.go:66] Checking if "newest-cni-134733" exists ...
	I0108 13:48:46.526534   20977 host.go:66] Checking if "newest-cni-134733" exists ...
	I0108 13:48:46.527217   20977 cli_runner.go:164] Run: docker container inspect newest-cni-134733 --format={{.State.Status}}
	I0108 13:48:46.527410   20977 cli_runner.go:164] Run: docker container inspect newest-cni-134733 --format={{.State.Status}}
	I0108 13:48:46.528592   20977 cli_runner.go:164] Run: docker container inspect newest-cni-134733 --format={{.State.Status}}
	I0108 13:48:46.528927   20977 cli_runner.go:164] Run: docker container inspect newest-cni-134733 --format={{.State.Status}}
	I0108 13:48:46.637858   20977 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0108 13:48:46.659304   20977 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 13:48:46.696152   20977 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0108 13:48:46.656493   20977 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 13:48:46.656567   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:46.661343   20977 addons.go:227] Setting addon default-storageclass=true in "newest-cni-134733"
	I0108 13:48:46.719595   20977 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	W0108 13:48:46.761212   20977 addons.go:236] addon default-storageclass should already be in state true
	I0108 13:48:46.798326   20977 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0108 13:48:46.761220   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 13:48:46.761321   20977 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:48:46.798423   20977 host.go:66] Checking if "newest-cni-134733" exists ...
	I0108 13:48:46.836837   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0108 13:48:46.836856   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 13:48:46.836862   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0108 13:48:46.837100   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:46.837167   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:46.837192   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:46.842216   20977 cli_runner.go:164] Run: docker container inspect newest-cni-134733 --format={{.State.Status}}
	I0108 13:48:46.853098   20977 api_server.go:51] waiting for apiserver process to appear ...
	I0108 13:48:46.853199   20977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 13:48:46.873071   20977 api_server.go:71] duration metric: took 410.916772ms to wait for apiserver process to appear ...
	I0108 13:48:46.873112   20977 api_server.go:87] waiting for apiserver healthz status ...
	I0108 13:48:46.873141   20977 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55274/healthz ...
	I0108 13:48:46.883213   20977 api_server.go:278] https://127.0.0.1:55274/healthz returned 200:
	ok
	I0108 13:48:46.885631   20977 api_server.go:140] control plane version: v1.25.3
	I0108 13:48:46.885650   20977 api_server.go:130] duration metric: took 12.527809ms to wait for apiserver health ...
	I0108 13:48:46.885661   20977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 13:48:46.894565   20977 system_pods.go:59] 8 kube-system pods found
	I0108 13:48:46.894594   20977 system_pods.go:61] "coredns-565d847f94-g6qsh" [076ac27f-b652-4d28-9197-b128a2b49d25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 13:48:46.894603   20977 system_pods.go:61] "etcd-newest-cni-134733" [f9cf22af-97f2-4321-8dd6-2e15fbf0015e] Running
	I0108 13:48:46.894618   20977 system_pods.go:61] "kube-apiserver-newest-cni-134733" [d0116b93-577a-4b17-947b-a713e5c14a32] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 13:48:46.894630   20977 system_pods.go:61] "kube-controller-manager-newest-cni-134733" [6e25bf06-327b-42a3-9041-0820b56ad83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 13:48:46.894644   20977 system_pods.go:61] "kube-proxy-hhnrj" [9c21f333-fa2b-4c52-b85f-2c1ab9160293] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 13:48:46.894654   20977 system_pods.go:61] "kube-scheduler-newest-cni-134733" [21e9ce4f-5437-46c3-9674-d146ae26ea05] Running
	I0108 13:48:46.894664   20977 system_pods.go:61] "metrics-server-5c8fd5cf8-pkv86" [cdc0296f-59fc-4b34-865a-e8dfb1851191] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 13:48:46.894672   20977 system_pods.go:61] "storage-provisioner" [e7891493-a9ec-45f5-9540-784ceab1b93c] Running
	I0108 13:48:46.894678   20977 system_pods.go:74] duration metric: took 9.0116ms to wait for pod list to return data ...
	I0108 13:48:46.894688   20977 default_sa.go:34] waiting for default service account to be created ...
	I0108 13:48:46.898995   20977 default_sa.go:45] found service account: "default"
	I0108 13:48:46.899025   20977 default_sa.go:55] duration metric: took 4.327062ms for default service account to be created ...
	I0108 13:48:46.899046   20977 kubeadm.go:573] duration metric: took 436.892639ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0108 13:48:46.899068   20977 node_conditions.go:102] verifying NodePressure condition ...
	I0108 13:48:46.903508   20977 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0108 13:48:46.903526   20977 node_conditions.go:123] node cpu capacity is 6
	I0108 13:48:46.903542   20977 node_conditions.go:105] duration metric: took 4.462577ms to run NodePressure ...
	I0108 13:48:46.903553   20977 start.go:217] waiting for startup goroutines ...
	I0108 13:48:46.935611   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55270 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/newest-cni-134733/id_rsa Username:docker}
	I0108 13:48:46.937835   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55270 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/newest-cni-134733/id_rsa Username:docker}
	I0108 13:48:46.940995   20977 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 13:48:46.941010   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 13:48:46.941159   20977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-134733
	I0108 13:48:46.941228   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55270 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/newest-cni-134733/id_rsa Username:docker}
	I0108 13:48:47.014030   20977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55270 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/newest-cni-134733/id_rsa Username:docker}
	I0108 13:48:47.069759   20977 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 13:48:47.069772   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0108 13:48:47.070391   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0108 13:48:47.070403   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0108 13:48:47.073296   20977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 13:48:47.154042   20977 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 13:48:47.154069   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 13:48:47.154074   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0108 13:48:47.154099   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0108 13:48:47.174330   20977 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 13:48:47.174349   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 13:48:47.175479   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0108 13:48:47.175491   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0108 13:48:47.253214   20977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 13:48:47.253955   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0108 13:48:47.253966   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0108 13:48:47.270567   20977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 13:48:47.279679   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0108 13:48:47.279696   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0108 13:48:47.362188   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0108 13:48:47.362205   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0108 13:48:47.458973   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0108 13:48:47.458995   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0108 13:48:47.486860   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0108 13:48:47.486882   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0108 13:48:47.565369   20977 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 13:48:47.565391   20977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0108 13:48:47.582793   20977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0108 13:48:48.593395   20977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.52005703s)
	I0108 13:48:48.666337   20977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.413085612s)
	I0108 13:48:48.666368   20977 addons.go:457] Verifying addon metrics-server=true in "newest-cni-134733"
	I0108 13:48:48.666373   20977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.395765497s)
	I0108 13:48:48.875692   20977 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.292862765s)
	I0108 13:48:48.901050   20977 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-134733 addons enable metrics-server	
	
	
	I0108 13:48:48.922211   20977 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0108 13:48:48.943114   20977 addons.go:488] enableAddons completed in 2.480934489s
	I0108 13:48:48.965029   20977 ssh_runner.go:195] Run: rm -f paused
	I0108 13:48:49.009563   20977 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0108 13:48:49.033377   20977 out.go:177] * Done! kubectl is now configured to use "newest-cni-134733" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sun 2023-01-08 21:28:12 UTC, end at Sun 2023-01-08 21:55:07 UTC. --
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Stopping Docker Application Container Engine...
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[128]: time="2023-01-08T21:28:15.083689881Z" level=info msg="Processing signal 'terminated'"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[128]: time="2023-01-08T21:28:15.084513971Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[128]: time="2023-01-08T21:28:15.084732043Z" level=info msg="Daemon shutdown complete"
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: docker.service: Succeeded.
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Stopped Docker Application Container Engine.
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Starting Docker Application Container Engine...
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.137878168Z" level=info msg="Starting up"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139628557Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139673949Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139695987Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.139707659Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141213135Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141257062Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141279605Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.141290776Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.146303293Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.150605267Z" level=info msg="Loading containers: start."
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.229829971Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.260962319Z" level=info msg="Loading containers: done."
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.269713094Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.269774718Z" level=info msg="Daemon has completed initialization"
	Jan 08 21:28:15 old-k8s-version-132223 systemd[1]: Started Docker Application Container Engine.
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.295857338Z" level=info msg="API listen on [::]:2376"
	Jan 08 21:28:15 old-k8s-version-132223 dockerd[423]: time="2023-01-08T21:28:15.298848948Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-08T21:55:09Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  21:55:10 up  1:54,  0 users,  load average: 0.03, 0.38, 0.76
	Linux old-k8s-version-132223 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sun 2023-01-08 21:28:12 UTC, end at Sun 2023-01-08 21:55:10 UTC. --
	Jan 08 21:55:08 old-k8s-version-132223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:55:09 old-k8s-version-132223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1666.
	Jan 08 21:55:09 old-k8s-version-132223 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:55:09 old-k8s-version-132223 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:55:09 old-k8s-version-132223 kubelet[34682]: I0108 21:55:09.444503   34682 server.go:410] Version: v1.16.0
	Jan 08 21:55:09 old-k8s-version-132223 kubelet[34682]: I0108 21:55:09.444779   34682 plugins.go:100] No cloud provider specified.
	Jan 08 21:55:09 old-k8s-version-132223 kubelet[34682]: I0108 21:55:09.444789   34682 server.go:773] Client rotation is on, will bootstrap in background
	Jan 08 21:55:09 old-k8s-version-132223 kubelet[34682]: I0108 21:55:09.446601   34682 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 08 21:55:09 old-k8s-version-132223 kubelet[34682]: W0108 21:55:09.447369   34682 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 08 21:55:09 old-k8s-version-132223 kubelet[34682]: W0108 21:55:09.447438   34682 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 08 21:55:09 old-k8s-version-132223 kubelet[34682]: F0108 21:55:09.447479   34682 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 08 21:55:09 old-k8s-version-132223 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 08 21:55:09 old-k8s-version-132223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 08 21:55:10 old-k8s-version-132223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Jan 08 21:55:10 old-k8s-version-132223 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 08 21:55:10 old-k8s-version-132223 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 08 21:55:10 old-k8s-version-132223 kubelet[34710]: I0108 21:55:10.176194   34710 server.go:410] Version: v1.16.0
	Jan 08 21:55:10 old-k8s-version-132223 kubelet[34710]: I0108 21:55:10.176463   34710 plugins.go:100] No cloud provider specified.
	Jan 08 21:55:10 old-k8s-version-132223 kubelet[34710]: I0108 21:55:10.176503   34710 server.go:773] Client rotation is on, will bootstrap in background
	Jan 08 21:55:10 old-k8s-version-132223 kubelet[34710]: I0108 21:55:10.178302   34710 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 08 21:55:10 old-k8s-version-132223 kubelet[34710]: W0108 21:55:10.179128   34710 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 08 21:55:10 old-k8s-version-132223 kubelet[34710]: W0108 21:55:10.179208   34710 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 08 21:55:10 old-k8s-version-132223 kubelet[34710]: F0108 21:55:10.179243   34710 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 08 21:55:10 old-k8s-version-132223 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 08 21:55:10 old-k8s-version-132223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:55:10.141837   21683 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 2 (398.347831ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-132223" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.75s)
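In the minikube log above, api_server.go repeatedly polls https://127.0.0.1:55274/healthz and treats the early 403 and 500 responses as "not ready yet" until the endpoint finally returns 200. The indented Go snippet below is a minimal sketch of such a readiness poll, not the actual minikube implementation; the URL, timeout, and poll interval are illustrative assumptions taken from the log.

	// Hypothetical sketch of an apiserver /healthz readiness poll (not minikube source).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	// TLS verification is skipped because the local apiserver uses a self-signed
	// certificate on 127.0.0.1 (assumption for this sketch).
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // 200: control plane is ready
				}
				// 403/500 mean the apiserver is up but its bootstrap hooks have not finished.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://127.0.0.1:55274/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}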

                                                
                                    

Test pass (261/295)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 18.01
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.25.3/json-events 10.94
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.67
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
18 TestDownloadOnlyKic 11.23
19 TestBinaryMirror 1.69
20 TestOffline 48.41
22 TestAddons/Setup 152.65
26 TestAddons/parallel/MetricsServer 5.59
27 TestAddons/parallel/HelmTiller 13.51
29 TestAddons/parallel/CSI 46.65
30 TestAddons/parallel/Headlamp 12.26
31 TestAddons/parallel/CloudSpanner 5.47
34 TestAddons/serial/GCPAuth/Namespaces 0.1
35 TestAddons/StoppedEnableDisable 12.88
36 TestCertOptions 32.87
37 TestCertExpiration 240.96
38 TestDockerFlags 35.12
39 TestForceSystemdFlag 33.99
40 TestForceSystemdEnv 34.08
42 TestHyperKitDriverInstallOrUpdate 7.78
45 TestErrorSpam/setup 29.31
46 TestErrorSpam/start 2.26
47 TestErrorSpam/status 1.27
48 TestErrorSpam/pause 1.83
49 TestErrorSpam/unpause 1.98
50 TestErrorSpam/stop 12.97
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 45.62
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 40.31
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 8.63
62 TestFunctional/serial/CacheCmd/cache/add_local 1.72
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.88
67 TestFunctional/serial/CacheCmd/cache/delete 0.16
68 TestFunctional/serial/MinikubeKubectlCmd 0.51
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.69
70 TestFunctional/serial/ExtraConfig 42.82
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 3.1
73 TestFunctional/serial/LogsFileCmd 3.19
75 TestFunctional/parallel/ConfigCmd 0.51
76 TestFunctional/parallel/DashboardCmd 17.75
77 TestFunctional/parallel/DryRun 1.68
78 TestFunctional/parallel/InternationalLanguage 0.74
79 TestFunctional/parallel/StatusCmd 1.29
82 TestFunctional/parallel/ServiceCmd 13.2
84 TestFunctional/parallel/AddonsCmd 0.33
85 TestFunctional/parallel/PersistentVolumeClaim 29.77
87 TestFunctional/parallel/SSHCmd 0.86
88 TestFunctional/parallel/CpCmd 1.78
89 TestFunctional/parallel/MySQL 31.92
90 TestFunctional/parallel/FileSync 0.5
91 TestFunctional/parallel/CertSync 2.77
95 TestFunctional/parallel/NodeLabels 0.05
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
99 TestFunctional/parallel/License 0.74
101 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
103 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.2
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
105 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
109 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
110 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
111 TestFunctional/parallel/ProfileCmd/profile_list 0.51
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
113 TestFunctional/parallel/MountCmd/any-port 10.71
114 TestFunctional/parallel/MountCmd/specific-port 2.8
115 TestFunctional/parallel/Version/short 0.12
116 TestFunctional/parallel/Version/components 1.01
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.4
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.4
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.53
121 TestFunctional/parallel/ImageCommands/ImageBuild 6.12
122 TestFunctional/parallel/ImageCommands/Setup 3.92
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.71
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.74
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.39
126 TestFunctional/parallel/DockerEnv/bash 1.82
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.39
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.51
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.37
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.85
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.52
134 TestFunctional/delete_addon-resizer_images 0.15
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
146 TestJSONOutput/start/Command 52.78
147 TestJSONOutput/start/Audit 0
149 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
150 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
152 TestJSONOutput/pause/Command 0.64
153 TestJSONOutput/pause/Audit 0
155 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/unpause/Command 0.6
159 TestJSONOutput/unpause/Audit 0
161 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/stop/Command 12.27
165 TestJSONOutput/stop/Audit 0
167 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
169 TestErrorJSONOutput 0.73
171 TestKicCustomNetwork/create_custom_network 31.29
172 TestKicCustomNetwork/use_default_bridge_network 32.28
173 TestKicExistingNetwork 31.24
174 TestKicCustomSubnet 32.33
175 TestMainNoArgs 0.08
176 TestMinikubeProfile 65.7
179 TestMountStart/serial/StartWithMountFirst 7.24
180 TestMountStart/serial/VerifyMountFirst 0.4
181 TestMountStart/serial/StartWithMountSecond 7.38
182 TestMountStart/serial/VerifyMountSecond 0.4
183 TestMountStart/serial/DeleteFirst 2.14
184 TestMountStart/serial/VerifyMountPostDelete 0.42
185 TestMountStart/serial/Stop 1.57
186 TestMountStart/serial/RestartStopped 5.28
187 TestMountStart/serial/VerifyMountPostStop 0.4
190 TestMultiNode/serial/FreshStart2Nodes 99.64
191 TestMultiNode/serial/DeployApp2Nodes 5.97
192 TestMultiNode/serial/PingHostFrom2Pods 0.92
193 TestMultiNode/serial/AddNode 27.74
194 TestMultiNode/serial/ProfileList 0.44
195 TestMultiNode/serial/CopyFile 15.01
196 TestMultiNode/serial/StopNode 13.82
197 TestMultiNode/serial/StartAfterStop 19.49
199 TestMultiNode/serial/DeleteNode 7.81
200 TestMultiNode/serial/StopMultiNode 24.82
201 TestMultiNode/serial/RestartMultiNode 75.59
202 TestMultiNode/serial/ValidateNameConflict 33.42
206 TestPreload 194.15
208 TestScheduledStopUnix 103.54
209 TestSkaffold 68.07
211 TestInsufficientStorage 14.52
227 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 10.26
228 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 14.2
229 TestStoppedBinaryUpgrade/Setup 1.67
231 TestStoppedBinaryUpgrade/MinikubeLogs 3.6
233 TestPause/serial/Start 42.69
234 TestPause/serial/SecondStartNoReconfiguration 47.73
235 TestPause/serial/Pause 0.73
236 TestPause/serial/VerifyStatus 0.42
237 TestPause/serial/Unpause 0.7
238 TestPause/serial/PauseAgain 0.76
239 TestPause/serial/DeletePaused 2.64
240 TestPause/serial/VerifyDeletedResources 0.58
249 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
250 TestNoKubernetes/serial/StartWithK8s 29.69
251 TestNoKubernetes/serial/StartWithStopK8s 18.61
252 TestNoKubernetes/serial/Start 6.72
253 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
254 TestNoKubernetes/serial/ProfileList 17.03
255 TestNoKubernetes/serial/Stop 1.62
256 TestNoKubernetes/serial/StartNoArgs 4.28
257 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
258 TestNetworkPlugins/group/auto/Start 46.35
259 TestNetworkPlugins/group/auto/KubeletFlags 0.41
260 TestNetworkPlugins/group/auto/NetCatPod 13.22
261 TestNetworkPlugins/group/auto/DNS 0.12
262 TestNetworkPlugins/group/auto/Localhost 0.12
263 TestNetworkPlugins/group/auto/HairPin 5.13
264 TestNetworkPlugins/group/kindnet/Start 62.41
265 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
266 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
267 TestNetworkPlugins/group/kindnet/NetCatPod 14.21
268 TestNetworkPlugins/group/kindnet/DNS 0.12
269 TestNetworkPlugins/group/kindnet/Localhost 0.11
270 TestNetworkPlugins/group/kindnet/HairPin 0.12
271 TestNetworkPlugins/group/enable-default-cni/Start 47.01
272 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
273 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.24
274 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
275 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
276 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
277 TestNetworkPlugins/group/false/Start 46.15
278 TestNetworkPlugins/group/false/KubeletFlags 0.43
279 TestNetworkPlugins/group/false/NetCatPod 15.21
280 TestNetworkPlugins/group/bridge/Start 48.56
281 TestNetworkPlugins/group/false/DNS 0.14
282 TestNetworkPlugins/group/false/Localhost 0.15
283 TestNetworkPlugins/group/false/HairPin 5.12
284 TestNetworkPlugins/group/kubenet/Start 46.44
285 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
286 TestNetworkPlugins/group/bridge/NetCatPod 15.2
287 TestNetworkPlugins/group/kubenet/KubeletFlags 0.46
288 TestNetworkPlugins/group/kubenet/NetCatPod 13.26
289 TestNetworkPlugins/group/bridge/DNS 0.12
290 TestNetworkPlugins/group/bridge/Localhost 0.12
291 TestNetworkPlugins/group/bridge/HairPin 0.12
292 TestNetworkPlugins/group/cilium/Start 97.85
293 TestNetworkPlugins/group/kubenet/DNS 0.13
294 TestNetworkPlugins/group/kubenet/Localhost 0.13
296 TestNetworkPlugins/group/calico/Start 329.79
297 TestNetworkPlugins/group/cilium/ControllerPod 5.02
298 TestNetworkPlugins/group/cilium/KubeletFlags 0.42
299 TestNetworkPlugins/group/cilium/NetCatPod 15.61
300 TestNetworkPlugins/group/cilium/DNS 0.14
301 TestNetworkPlugins/group/cilium/Localhost 0.13
302 TestNetworkPlugins/group/cilium/HairPin 0.11
307 TestNetworkPlugins/group/calico/ControllerPod 5.02
308 TestNetworkPlugins/group/calico/KubeletFlags 0.41
309 TestNetworkPlugins/group/calico/NetCatPod 14.22
310 TestNetworkPlugins/group/calico/DNS 0.12
311 TestNetworkPlugins/group/calico/Localhost 0.11
312 TestNetworkPlugins/group/calico/HairPin 0.11
314 TestStartStop/group/no-preload/serial/FirstStart 56.43
315 TestStartStop/group/old-k8s-version/serial/Stop 1.68
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.46
318 TestStartStop/group/no-preload/serial/DeployApp 10.3
319 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
320 TestStartStop/group/no-preload/serial/Stop 12.4
321 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.39
322 TestStartStop/group/no-preload/serial/SecondStart 303.51
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 20.02
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.45
326 TestStartStop/group/no-preload/serial/Pause 3.43
328 TestStartStop/group/embed-certs/serial/FirstStart 45.74
329 TestStartStop/group/embed-certs/serial/DeployApp 9.27
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
331 TestStartStop/group/embed-certs/serial/Stop 12.43
332 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.4
333 TestStartStop/group/embed-certs/serial/SecondStart 301.76
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 20.03
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.44
338 TestStartStop/group/embed-certs/serial/Pause 3.4
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.77
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.48
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.39
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 296.11
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 19.02
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.45
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.32
352 TestStartStop/group/newest-cni/serial/FirstStart 42.8
353 TestStartStop/group/newest-cni/serial/DeployApp 0
354 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.9
355 TestStartStop/group/newest-cni/serial/Stop 12.52
356 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.39
357 TestStartStop/group/newest-cni/serial/SecondStart 19.34
358 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
360 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.53
361 TestStartStop/group/newest-cni/serial/Pause 3.38
TestDownloadOnly/v1.16.0/json-events (18.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-122642 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-122642 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (18.008496202s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.01s)
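The "(dbg) Run" / "(dbg) Done" lines above come from the test harness invoking the minikube binary and recording how long it took. The indented Go snippet below is a minimal sketch of that run-and-time pattern, not the actual aaa_download_only_test.go code; the binary path, arguments, and timeout are copied from the log for illustration only.

	// Hypothetical sketch: run a CLI command with a hard timeout and measure its duration,
	// similar to the (dbg) Run / Done output above.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// runWithTimeout runs a command, capturing combined output and elapsed wall-clock time.
	func runWithTimeout(timeout time.Duration, name string, args ...string) (string, time.Duration, error) {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		start := time.Now()
		out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
		return string(out), time.Since(start), err
	}

	func main() {
		// Arguments mirror the download-only invocation logged above.
		out, took, err := runWithTimeout(10*time.Minute,
			"out/minikube-darwin-amd64", "start", "-o=json", "--download-only",
			"-p", "download-only-122642", "--force", "--alsologtostderr",
			"--kubernetes-version=v1.16.0", "--container-runtime=docker", "--driver=docker")
		if err != nil {
			fmt.Println("run failed:", err)
		}
		fmt.Printf("completed in %s (%d bytes of output)\n", took, len(out))
	}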

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-122642
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-122642: exit status 85 (299.410766ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-122642 | jenkins | v1.28.0 | 08 Jan 23 12:26 PST |          |
	|         | -p download-only-122642        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 12:26:43
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 12:26:43.033543    4087 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:26:43.033707    4087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:26:43.033712    4087 out.go:309] Setting ErrFile to fd 2...
	I0108 12:26:43.033716    4087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:26:43.033838    4087 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	W0108 12:26:43.033940    4087 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-2761/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-2761/.minikube/config/config.json: no such file or directory
	I0108 12:26:43.034693    4087 out.go:303] Setting JSON to true
	I0108 12:26:43.054149    4087 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1576,"bootTime":1673208027,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 12:26:43.054250    4087 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:26:43.076118    4087 out.go:97] [download-only-122642] minikube v1.28.0 on Darwin 13.0.1
	W0108 12:26:43.076385    4087 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 12:26:43.076410    4087 notify.go:220] Checking for updates...
	I0108 12:26:43.098002    4087 out.go:169] MINIKUBE_LOCATION=15565
	I0108 12:26:43.120222    4087 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:26:43.142002    4087 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:26:43.163197    4087 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:26:43.185141    4087 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	W0108 12:26:43.227982    4087 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 12:26:43.228429    4087 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 12:26:43.288626    4087 docker.go:137] docker version: linux-20.10.21
	I0108 12:26:43.288746    4087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:26:43.427820    4087 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2023-01-08 20:26:43.337869283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:26:43.449831    4087 out.go:97] Using the docker driver based on user configuration
	I0108 12:26:43.449882    4087 start.go:294] selected driver: docker
	I0108 12:26:43.449896    4087 start.go:838] validating driver "docker" against <nil>
	I0108 12:26:43.450171    4087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:26:43.591124    4087 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2023-01-08 20:26:43.500896304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:26:43.591217    4087 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I0108 12:26:43.596197    4087 start_flags.go:384] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0108 12:26:43.596304    4087 start_flags.go:892] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 12:26:43.617631    4087 out.go:169] Using Docker Desktop driver with root privileges
	I0108 12:26:43.639007    4087 cni.go:95] Creating CNI manager for ""
	I0108 12:26:43.639036    4087 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 12:26:43.639055    4087 start_flags.go:317] config:
	{Name:download-only-122642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-122642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:26:43.660858    4087 out.go:97] Starting control plane node download-only-122642 in cluster download-only-122642
	I0108 12:26:43.660967    4087 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 12:26:43.682742    4087 out.go:97] Pulling base image ...
	I0108 12:26:43.682859    4087 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 12:26:43.682963    4087 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 12:26:43.736791    4087 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0108 12:26:43.737052    4087 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0108 12:26:43.737190    4087 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0108 12:26:43.789473    4087 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 12:26:43.789491    4087 cache.go:57] Caching tarball of preloaded images
	I0108 12:26:43.789675    4087 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 12:26:43.810907    4087 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 12:26:43.810948    4087 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:26:44.047187    4087 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 12:26:50.662305    4087 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:26:50.662456    4087 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:26:51.207127    4087 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0108 12:26:51.207343    4087 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/download-only-122642/config.json ...
	I0108 12:26:51.207375    4087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/download-only-122642/config.json: {Name:mkc65a765b1ac37d5fa7b8062c3758afb0e3d456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 12:26:51.207685    4087 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 12:26:51.207946    4087 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-122642"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
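
The preload step above fetches the tarball with a "?checksum=md5:..." query parameter and then verifies the file on disk (preload.go:238/249/256). Below is a minimal sketch of that verify-after-download pattern; the verifyMD5 helper and the local file path are assumptions for illustration, not minikube's actual code.

// checksum_sketch.go - illustrative only; not minikube's implementation.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 (hypothetical helper) hashes the file at path and compares it to
// the hex digest taken from the download URL's checksum=md5:... parameter.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// File name and digest mirror the v1.16.0 preload entry in the log above.
	err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"326f3ce331abb64565b50b8c9e791244")
	fmt.Println("verify:", err)
}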

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/json-events (10.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-122642 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-122642 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker : (10.940570208s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (10.94s)
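
The json-events subtest drives "start -o=json", which emits one JSON event per line on stdout. A minimal sketch of consuming that stream follows; it assumes only that each line is a standalone JSON object and does not rely on minikube's exact event schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Read newline-delimited JSON events from stdin, e.g.:
	//   out/minikube-darwin-amd64 start -o=json --download-only ... | go run decode_events.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Fprintln(os.Stderr, "skipping non-JSON line:", err)
			continue
		}
		fmt.Printf("event with %d fields\n", len(ev))
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}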

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-122642
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-122642: exit status 85 (299.18649ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-122642 | jenkins | v1.28.0 | 08 Jan 23 12:26 PST |          |
	|         | -p download-only-122642        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-122642 | jenkins | v1.28.0 | 08 Jan 23 12:27 PST |          |
	|         | -p download-only-122642        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/08 12:27:01
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 12:27:01.345629    4132 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:27:01.345800    4132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:27:01.345806    4132 out.go:309] Setting ErrFile to fd 2...
	I0108 12:27:01.345811    4132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:27:01.345921    4132 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	W0108 12:27:01.346025    4132 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-2761/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-2761/.minikube/config/config.json: no such file or directory
	I0108 12:27:01.346419    4132 out.go:303] Setting JSON to true
	I0108 12:27:01.365358    4132 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1594,"bootTime":1673208027,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 12:27:01.365459    4132 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:27:01.387767    4132 out.go:97] [download-only-122642] minikube v1.28.0 on Darwin 13.0.1
	I0108 12:27:01.387936    4132 notify.go:220] Checking for updates...
	I0108 12:27:01.409831    4132 out.go:169] MINIKUBE_LOCATION=15565
	I0108 12:27:01.453459    4132 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:27:01.475038    4132 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:27:01.497069    4132 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:27:01.518497    4132 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	W0108 12:27:01.560820    4132 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 12:27:01.561605    4132 config.go:180] Loaded profile config "download-only-122642": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0108 12:27:01.561696    4132 start.go:746] api.Load failed for download-only-122642: filestore "download-only-122642": Docker machine "download-only-122642" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 12:27:01.561783    4132 driver.go:365] Setting default libvirt URI to qemu:///system
	W0108 12:27:01.561828    4132 start.go:746] api.Load failed for download-only-122642: filestore "download-only-122642": Docker machine "download-only-122642" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 12:27:01.620401    4132 docker.go:137] docker version: linux-20.10.21
	I0108 12:27:01.620515    4132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:27:01.758419    4132 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2023-01-08 20:27:01.668660286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:27:01.780369    4132 out.go:97] Using the docker driver based on existing profile
	I0108 12:27:01.780406    4132 start.go:294] selected driver: docker
	I0108 12:27:01.780418    4132 start.go:838] validating driver "docker" against &{Name:download-only-122642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-122642 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/so
cket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:27:01.780762    4132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:27:01.919185    4132 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2023-01-08 20:27:01.830578626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:27:01.921651    4132 cni.go:95] Creating CNI manager for ""
	I0108 12:27:01.921670    4132 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0108 12:27:01.921684    4132 start_flags.go:317] config:
	{Name:download-only-122642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-122642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:27:01.943669    4132 out.go:97] Starting control plane node download-only-122642 in cluster download-only-122642
	I0108 12:27:01.943802    4132 cache.go:120] Beginning downloading kic base image for docker with docker
	I0108 12:27:01.965330    4132 out.go:97] Pulling base image ...
	I0108 12:27:01.965474    4132 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:27:01.965583    4132 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0108 12:27:02.020020    4132 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0108 12:27:02.020204    4132 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0108 12:27:02.020230    4132 image.go:63] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory, skipping pull
	I0108 12:27:02.020235    4132 image.go:102] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in cache, skipping pull
	I0108 12:27:02.020244    4132 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c as a tarball
	I0108 12:27:02.066945    4132 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 12:27:02.066991    4132 cache.go:57] Caching tarball of preloaded images
	I0108 12:27:02.067392    4132 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:27:02.089196    4132 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I0108 12:27:02.089266    4132 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:27:02.318587    4132 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0108 12:27:09.509836    4132 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:27:09.509997    4132 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0108 12:27:10.094721    4132 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0108 12:27:10.094802    4132 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/download-only-122642/config.json ...
	I0108 12:27:10.095190    4132 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0108 12:27:10.095456    4132 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/darwin/amd64/v1.25.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-122642"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.30s)
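
Both LogsDuration subtests expect "minikube logs -p download-only-122642" to fail with exit status 85 because the profile was only ever used for downloads. A minimal sketch of running the binary and reading that exit code, using the binary path from this report; the real "(dbg) Run" helper in the shared test code does more than this.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the test; adjust the binary path for your checkout.
	cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-122642")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run binary:", err)
		return
	}
	fmt.Printf("unexpected success:\n%s", out)
}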

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.67s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-122642
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                    
x
+
TestDownloadOnlyKic (11.23s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-122713 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-122713 --force --alsologtostderr --driver=docker : (10.150504138s)
helpers_test.go:175: Cleaning up "download-docker-122713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-122713
--- PASS: TestDownloadOnlyKic (11.23s)

                                                
                                    
x
+
TestBinaryMirror (1.69s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-122725 --alsologtostderr --binary-mirror http://127.0.0.1:49476 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-122725 --alsologtostderr --binary-mirror http://127.0.0.1:49476 --driver=docker : (1.073643459s)
helpers_test.go:175: Cleaning up "binary-mirror-122725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-122725
--- PASS: TestBinaryMirror (1.69s)
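
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:49476 in this run) so kubectl/kubelet/kubeadm come from it instead of storage.googleapis.com. A minimal sketch of such a mirror, assuming the binaries are already laid out under ./mirror using the release-style paths seen in the kubectl URL earlier in this log (the port and directory are illustrative):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror so that e.g. /v1.25.3/bin/darwin/amd64/kubectl resolves
	// to mirror/v1.25.3/bin/darwin/amd64/kubectl on disk.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("binary mirror listening on 127.0.0.1:49476")
	log.Fatal(http.ListenAndServe("127.0.0.1:49476", fs))
}

With the server running, a start invocation like the one above ("start --download-only --binary-mirror http://127.0.0.1:49476 ...") pulls the binaries from it.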

                                                
                                    
x
+
TestOffline (48.41s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-130508 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-130508 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (45.652118764s)
helpers_test.go:175: Cleaning up "offline-docker-130508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-130508
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-130508: (2.760405181s)
--- PASS: TestOffline (48.41s)

                                                
                                    
x
+
TestAddons/Setup (152.65s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-122726 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p addons-122726 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.648174784s)
--- PASS: TestAddons/Setup (152.65s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 2.342254ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-6llvk" [f4f2ab4c-6314-43c2-832e-926682a2989c] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00819734s
addons_test.go:372: (dbg) Run:  kubectl --context addons-122726 top pods -n kube-system
addons_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122726 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.59s)
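
The metrics-server check waits up to 6m0s for pods labelled k8s-app=metrics-server to be healthy before running "kubectl top pods". A minimal sketch of that wait loop using kubectl via os/exec; the context name and label are copied from the log, while the polling interval and the simple Running-phase check are assumptions (the real helper in helpers_test.go does more, such as dumping events on failure).

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls kubectl until every pod matching the label selector
// reports phase Running, or the timeout elapses. Illustrative only.
func waitForRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			allRunning := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
}

func main() {
	err := waitForRunning("addons-122726", "kube-system", "k8s-app=metrics-server", 6*time.Minute)
	fmt.Println("wait result:", err)
}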

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (13.51s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 2.933074ms
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-k9xg7" [a4a26955-f86e-4d50-bb22-3832d9d09c42] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008482939s
addons_test.go:430: (dbg) Run:  kubectl --context addons-122726 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:430: (dbg) Done: kubectl --context addons-122726 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.008045897s)
addons_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122726 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.51s)

                                                
                                    
x
+
TestAddons/parallel/CSI (46.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 5.084546ms
addons_test.go:521: (dbg) Run:  kubectl --context addons-122726 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-122726 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:531: (dbg) Run:  kubectl --context addons-122726 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [69534502-cf1e-4910-8f8c-a2218bc68980] Pending
helpers_test.go:342: "task-pv-pod" [69534502-cf1e-4910-8f8c-a2218bc68980] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [69534502-cf1e-4910-8f8c-a2218bc68980] Running
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 25.007483928s
addons_test.go:541: (dbg) Run:  kubectl --context addons-122726 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-122726 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-122726 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context addons-122726 delete pod task-pv-pod
addons_test.go:551: (dbg) Done: kubectl --context addons-122726 delete pod task-pv-pod: (1.013037977s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-122726 delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context addons-122726 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-122726 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-122726 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [ef76463e-4856-42a1-a632-0892a85c4b06] Pending
helpers_test.go:342: "task-pv-pod-restore" [ef76463e-4856-42a1-a632-0892a85c4b06] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [ef76463e-4856-42a1-a632-0892a85c4b06] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.008171506s
addons_test.go:583: (dbg) Run:  kubectl --context addons-122726 delete pod task-pv-pod-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-122726 delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context addons-122726 delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122726 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-darwin-amd64 -p addons-122726 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.886981837s)
addons_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122726 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.65s)
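
Within the CSI flow, the helper repeatedly queries the "new-snapshot-demo" VolumeSnapshot with the jsonpath shown at helpers_test.go:417 until readyToUse is true before restoring from it. A minimal sketch of that check; the function name, poll count, and interval are assumptions, only the kubectl query mirrors the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// snapshotReady issues the same jsonpath query the test helper uses; it is
// not the actual helpers_test.go implementation.
func snapshotReady(kubeContext, namespace, name string) bool {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "volumesnapshot", name, "-n", namespace,
		"-o", "jsonpath={.status.readyToUse}").Output()
	return err == nil && strings.TrimSpace(string(out)) == "true"
}

func main() {
	// Poll a few times, as the test does while waiting for the snapshot.
	for i := 0; i < 10; i++ {
		if snapshotReady("addons-122726", "default", "new-snapshot-demo") {
			fmt.Println("snapshot is ready to use")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("snapshot not ready yet")
}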

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-122726 --alsologtostderr -v=1
addons_test.go:774: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-122726 --alsologtostderr -v=1: (2.25117965s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-s7zgm" [83b0be4b-8cb4-4563-8236-a97e9ccc9c39] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-764769c887-s7zgm" [83b0be4b-8cb4-4563-8236-a97e9ccc9c39] Running

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.008839984s
--- PASS: TestAddons/parallel/Headlamp (12.26s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-4c9dx" [17c5173b-0d3f-4400-abdd-8d4516b6ff88] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00944045s
addons_test.go:798: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-122726
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context addons-122726 create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context addons-122726 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.88s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-122726
addons_test.go:139: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-122726: (12.430384935s)
addons_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-122726
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-122726
--- PASS: TestAddons/StoppedEnableDisable (12.88s)

                                                
                                    
x
+
TestCertOptions (32.87s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-130650 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-130650 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (29.283664234s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-130650 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-130650 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-130650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-130650
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-130650: (2.707748489s)
--- PASS: TestCertOptions (32.87s)
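
TestCertOptions uses "openssl x509 -text -noout" on the node to confirm that the generated apiserver certificate carries the extra SANs passed on the command line (127.0.0.1, 192.168.15.15, localhost, www.google.com). A minimal sketch of the same inspection in Go against a PEM file copied off the node; the local file name is an assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// apiserver.crt copied from /var/lib/minikube/certs/ on the node.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// The test expects the --apiserver-names and --apiserver-ips values here.
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP SANs:  ", cert.IPAddresses)
}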

                                                
                                    
x
+
TestCertExpiration (240.96s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-130630 --memory=2048 --cert-expiration=3m --driver=docker 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-130630 --memory=2048 --cert-expiration=3m --driver=docker : (31.065120046s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-130630 --memory=2048 --cert-expiration=8760h --driver=docker 
E0108 13:10:16.945186    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 13:10:21.675243    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-130630 --memory=2048 --cert-expiration=8760h --driver=docker : (27.283430416s)
helpers_test.go:175: Cleaning up "cert-expiration-130630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-130630
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-130630: (2.605798829s)
--- PASS: TestCertExpiration (240.96s)
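
TestCertExpiration first provisions with --cert-expiration=3m, waits for those certificates to near expiry, then re-provisions with --cert-expiration=8760h; the profile config dumps earlier in this report also show the default CertExpiration of 26280h0m0s. A small sketch showing what those Go-style durations work out to, using only values that appear in this report:

package main

import (
	"fmt"
	"time"
)

func main() {
	// 3m and 8760h come from the test flags above; 26280h is the profile default.
	for _, s := range []string{"3m", "8760h", "26280h"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			fmt.Println(s, "->", err)
			continue
		}
		fmt.Printf("%-7s = %v (%.1f days)\n", s, d, d.Hours()/24)
	}
}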

                                                
                                    
x
+
TestDockerFlags (35.12s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-130615 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-130615 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (31.634371228s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-130615 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-130615 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-130615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-130615
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-130615: (2.629943647s)
--- PASS: TestDockerFlags (35.12s)
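
TestDockerFlags passes --docker-env=FOO=BAR --docker-env=BAZ=BAT and then reads "systemctl show docker --property=Environment --no-pager" over SSH to confirm the daemon picked them up. A minimal sketch of checking that output for the expected pairs; the sample input string is illustrative of systemctl's Environment=... format and is not taken from this run.

package main

import (
	"fmt"
	"strings"
)

// hasEnvPair reports whether a `systemctl show --property=Environment` line
// contains the given KEY=VALUE entry (simple whitespace-separated values only).
func hasEnvPair(showOutput, pair string) bool {
	line := strings.TrimSpace(showOutput)
	line = strings.TrimPrefix(line, "Environment=")
	for _, entry := range strings.Fields(line) {
		if entry == pair {
			return true
		}
	}
	return false
}

func main() {
	sample := "Environment=NO_PROXY=localhost FOO=BAR BAZ=BAT"
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, hasEnvPair(sample, want))
	}
}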

                                                
                                    
x
+
TestForceSystemdFlag (33.99s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-130556 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-130556 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (30.500788747s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-130556 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-130556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-130556
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-130556: (2.977580376s)
--- PASS: TestForceSystemdFlag (33.99s)

                                                
                                    
x
+
TestForceSystemdEnv (34.08s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-130541 --memory=2048 --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-130541 --memory=2048 --alsologtostderr -v=5 --driver=docker : (30.894597826s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-130541 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-130541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-130541
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-130541: (2.692027692s)
--- PASS: TestForceSystemdEnv (34.08s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (7.78s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.78s)

TestErrorSpam/setup (29.31s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-123153 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-123153 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 --driver=docker : (29.306660126s)
--- PASS: TestErrorSpam/setup (29.31s)

TestErrorSpam/start (2.26s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 start --dry-run
--- PASS: TestErrorSpam/start (2.26s)

TestErrorSpam/status (1.27s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 status
--- PASS: TestErrorSpam/status (1.27s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.98s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 unpause
--- PASS: TestErrorSpam/unpause (1.98s)

TestErrorSpam/stop (12.97s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 stop: (12.327353715s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-123153 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-123153 stop
--- PASS: TestErrorSpam/stop (12.97s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/test/nested/copy/4083/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.62s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123245 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-123245 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (45.622137069s)
--- PASS: TestFunctional/serial/StartWithProxy (45.62s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.31s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123245 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-123245 --alsologtostderr -v=8: (40.288584654s)
functional_test.go:656: soft start took 40.289239737s for "functional-123245" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.31s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-123245 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 cache add k8s.gcr.io/pause:3.1: (3.026782385s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 cache add k8s.gcr.io/pause:3.3: (2.952508484s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 cache add k8s.gcr.io/pause:latest: (2.647847006s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.63s)

TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-123245 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4281984221/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 cache add minikube-local-cache-test:functional-123245
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 cache add minikube-local-cache-test:functional-123245: (1.165845093s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 cache delete minikube-local-cache-test:functional-123245
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-123245
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.72s)
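The local-image variant above pushes a host-built image into the cluster through the same cache mechanism; a rough sketch, assuming the current directory holds a minimal Dockerfile to stand in for the generated test build context:

    docker build -t minikube-local-cache-test:functional-123245 .
    out/minikube-darwin-amd64 -p functional-123245 cache add minikube-local-cache-test:functional-123245
    out/minikube-darwin-amd64 -p functional-123245 cache delete minikube-local-cache-test:functional-123245
    docker rmi minikube-local-cache-test:functional-123245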

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123245 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (398.457841ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 cache reload: (1.631081261s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.88s)
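The reload sequence above removes a cached image inside the node, confirms it is gone, and asks minikube to push its cache back in; a condensed sketch using the same profile and image as this run:

    out/minikube-darwin-amd64 -p functional-123245 ssh sudo docker rmi k8s.gcr.io/pause:latest
    out/minikube-darwin-amd64 -p functional-123245 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # non-zero exit while the image is missing
    out/minikube-darwin-amd64 -p functional-123245 cache reload
    out/minikube-darwin-amd64 -p functional-123245 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again once the cache is restored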

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 kubectl -- --context functional-123245 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-123245 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.69s)

TestFunctional/serial/ExtraConfig (42.82s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123245 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 12:34:59.551135    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:34:59.558658    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:34:59.570808    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:34:59.592999    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:34:59.633591    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:34:59.714000    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:34:59.876075    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:35:00.197371    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:35:00.837812    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:35:02.118147    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:35:04.678233    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-123245 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.817006758s)
functional_test.go:754: restart took 42.817165921s for "functional-123245" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.82s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-123245 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 logs
E0108 12:35:09.798498    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 logs: (3.102545467s)
--- PASS: TestFunctional/serial/LogsCmd (3.10s)

TestFunctional/serial/LogsFileCmd (3.19s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3563343340/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3563343340/001/logs.txt: (3.184953013s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.19s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123245 config get cpus: exit status 14 (61.468541ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123245 config get cpus: exit status 14 (61.150016ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
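The assertions above hinge on config get exiting with status 14 when a key is unset; a short sketch of the same round trip on the cpus key used in this run:

    out/minikube-darwin-amd64 -p functional-123245 config unset cpus
    out/minikube-darwin-amd64 -p functional-123245 config get cpus    # exit status 14: "specified key could not be found in config"
    out/minikube-darwin-amd64 -p functional-123245 config set cpus 2
    out/minikube-darwin-amd64 -p functional-123245 config get cpus    # should now succeed and print the stored value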

TestFunctional/parallel/DashboardCmd (17.75s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-123245 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-123245 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 6431: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.75s)

TestFunctional/parallel/DryRun (1.68s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123245 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-123245 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (760.202673ms)

-- stdout --
	* [functional-123245] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0108 12:35:58.038288    6310 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:35:58.038460    6310 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:35:58.038466    6310 out.go:309] Setting ErrFile to fd 2...
	I0108 12:35:58.038470    6310 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:35:58.038582    6310 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 12:35:58.039073    6310 out.go:303] Setting JSON to false
	I0108 12:35:58.060306    6310 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2131,"bootTime":1673208027,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 12:35:58.060492    6310 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:35:58.082470    6310 out.go:177] * [functional-123245] minikube v1.28.0 on Darwin 13.0.1
	I0108 12:35:58.125391    6310 notify.go:220] Checking for updates...
	I0108 12:35:58.147102    6310 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 12:35:58.189473    6310 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:35:58.232247    6310 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:35:58.273991    6310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:35:58.316266    6310 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 12:35:58.338782    6310 config.go:180] Loaded profile config "functional-123245": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:35:58.339451    6310 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 12:35:58.408361    6310 docker.go:137] docker version: linux-20.10.21
	I0108 12:35:58.408513    6310 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:35:58.558519    6310 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 20:35:58.462404762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:35:58.602958    6310 out.go:177] * Using the docker driver based on existing profile
	I0108 12:35:58.623944    6310 start.go:294] selected driver: docker
	I0108 12:35:58.623981    6310 start.go:838] validating driver "docker" against &{Name:functional-123245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-123245 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:35:58.624148    6310 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 12:35:58.649940    6310 out.go:177] 
	W0108 12:35:58.670975    6310 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 12:35:58.692002    6310 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123245 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.68s)
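The non-zero exit above is minikube's preflight memory validation: a dry run that requests less than the usable minimum (1800MB, per the RSRC_INSUFFICIENT_REQ_MEMORY message) fails with exit status 23, while the same dry run without the undersized --memory flag passes. A minimal reproduction against this profile:

    out/minikube-darwin-amd64 start -p functional-123245 --dry-run --memory 250MB --driver=docker   # exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-darwin-amd64 start -p functional-123245 --dry-run --driver=docker                  # validation passes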

TestFunctional/parallel/InternationalLanguage (0.74s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123245 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-123245 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (736.990628ms)

-- stdout --
	* [functional-123245] minikube v1.28.0 sur Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0108 12:35:57.296655    6283 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:35:57.296835    6283 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:35:57.296842    6283 out.go:309] Setting ErrFile to fd 2...
	I0108 12:35:57.296847    6283 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:35:57.296974    6283 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 12:35:57.297464    6283 out.go:303] Setting JSON to false
	I0108 12:35:57.318560    6283 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2130,"bootTime":1673208027,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0108 12:35:57.318669    6283 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0108 12:35:57.341666    6283 out.go:177] * [functional-123245] minikube v1.28.0 sur Darwin 13.0.1
	I0108 12:35:57.365002    6283 notify.go:220] Checking for updates...
	I0108 12:35:57.386390    6283 out.go:177]   - MINIKUBE_LOCATION=15565
	I0108 12:35:57.407824    6283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	I0108 12:35:57.449441    6283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0108 12:35:57.491520    6283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 12:35:57.533618    6283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	I0108 12:35:57.555477    6283 config.go:180] Loaded profile config "functional-123245": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:35:57.556176    6283 driver.go:365] Setting default libvirt URI to qemu:///system
	I0108 12:35:57.623919    6283 docker.go:137] docker version: linux-20.10.21
	I0108 12:35:57.624061    6283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 12:35:57.781389    6283 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-08 20:35:57.680652321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0108 12:35:57.824117    6283 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0108 12:35:57.845937    6283 start.go:294] selected driver: docker
	I0108 12:35:57.845958    6283 start.go:838] validating driver "docker" against &{Name:functional-123245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-123245 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0108 12:35:57.846098    6283 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 12:35:57.888869    6283 out.go:177] 
	W0108 12:35:57.909994    6283 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 12:35:57.930895    6283 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.74s)

TestFunctional/parallel/StatusCmd (1.29s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)

TestFunctional/parallel/ServiceCmd (13.2s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-123245 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-123245 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-6tvzv" [c7b12ed4-7237-4c7b-8304-d6235e7a4f47] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-5fcdfb5cc4-6tvzv" [c7b12ed4-7237-4c7b-8304-d6235e7a4f47] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.008164274s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 service --namespace=default --https --url hello-node: (2.029964591s)
functional_test.go:1476: found endpoint: https://127.0.0.1:50322
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 service hello-node --url --format={{.IP}}: (2.043593635s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 service hello-node --url: (2.028139121s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:50348
--- PASS: TestFunctional/parallel/ServiceCmd (13.20s)
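The flow above is plain kubectl plus minikube's service tunnelling; a sketch using the same image, port, and deployment name as this run (on the docker driver the returned URL is a 127.0.0.1 tunnel endpoint, as in the log):

    kubectl --context functional-123245 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-123245 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-darwin-amd64 -p functional-123245 service list
    out/minikube-darwin-amd64 -p functional-123245 service hello-node --url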

TestFunctional/parallel/AddonsCmd (0.33s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.33s)

TestFunctional/parallel/PersistentVolumeClaim (29.77s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [cda36005-3002-431f-aea4-00570c804b3c] Running
E0108 12:35:20.038564    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011020246s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-123245 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-123245 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-123245 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-123245 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [50f1fb60-bee3-4905-8039-8a2b6ed82c2b] Pending
helpers_test.go:342: "sp-pod" [50f1fb60-bee3-4905-8039-8a2b6ed82c2b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [50f1fb60-bee3-4905-8039-8a2b6ed82c2b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.00989296s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-123245 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-123245 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-123245 delete -f testdata/storage-provisioner/pod.yaml: (1.065118479s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-123245 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [ef955ebc-4f90-4444-ba5c-892787642bf9] Pending
helpers_test.go:342: "sp-pod" [ef955ebc-4f90-4444-ba5c-892787642bf9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0108 12:35:40.519953    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
helpers_test.go:342: "sp-pod" [ef955ebc-4f90-4444-ba5c-892787642bf9] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.009225874s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-123245 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.77s)
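The persistence check above reduces to: claim a volume, mount it in a pod, write a file, recreate the pod, and confirm the file survived. A condensed sketch using the testdata manifests referenced in the log:

    kubectl --context functional-123245 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-123245 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-123245 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-123245 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-123245 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-123245 exec sp-pod -- ls /tmp/mount   # foo should still be listed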

TestFunctional/parallel/SSHCmd (0.86s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.86s)

TestFunctional/parallel/CpCmd (1.78s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh -n functional-123245 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 cp functional-123245:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd2703456379/001/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh -n functional-123245 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.78s)

TestFunctional/parallel/MySQL (31.92s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-123245 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-kc54n" [19268b19-1a85-43b5-acec-f0f6dec24c2c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-kc54n" [19268b19-1a85-43b5-acec-f0f6dec24c2c] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.015032945s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-123245 exec mysql-596b7fcdbf-kc54n -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-123245 exec mysql-596b7fcdbf-kc54n -- mysql -ppassword -e "show databases;": exit status 1 (230.769558ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-123245 exec mysql-596b7fcdbf-kc54n -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-123245 exec mysql-596b7fcdbf-kc54n -- mysql -ppassword -e "show databases;": exit status 1 (154.26806ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-123245 exec mysql-596b7fcdbf-kc54n -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-123245 exec mysql-596b7fcdbf-kc54n -- mysql -ppassword -e "show databases;": exit status 1 (119.690534ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-123245 exec mysql-596b7fcdbf-kc54n -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.92s)
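
The non-zero exits above are the expected warm-up phase: the pod reports Running before mysqld has finished initializing, so the first few "show databases;" probes fail with ERROR 1045/2002 and the test simply retries until one succeeds. A hand-rolled version of that retry loop, using the pod name from this run (the 5s interval and 24 attempts are arbitrary choices):

  # poll the MySQL pod until the server accepts the query, or give up after ~2 minutes
  for i in $(seq 1 24); do
    kubectl --context functional-123245 exec mysql-596b7fcdbf-kc54n -- \
      mysql -ppassword -e "show databases;" && break
    sleep 5
  done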

TestFunctional/parallel/FileSync (0.5s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/4083/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo cat /etc/test/nested/copy/4083/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.50s)

TestFunctional/parallel/CertSync (2.77s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/4083.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo cat /etc/ssl/certs/4083.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/4083.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo cat /usr/share/ca-certificates/4083.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/40832.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo cat /etc/ssl/certs/40832.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/40832.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo cat /usr/share/ca-certificates/40832.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.77s)
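
CertSync verifies that the host test certificates (4083.pem and 40832.pem, named after the test process ID seen elsewhere in this log) were synced into the node at both certificate locations, along with hash-named copies (51391683.0 for the first, 3ec20f2e.0 for the second). If checking the hash link by hand, the OpenSSL subject hash of the synced PEM should match the numeric filename; a quick sketch, assuming openssl is available on the host:

  # the printed hash should match the "<hash>.0" filename the test reads above
  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo cat /etc/ssl/certs/4083.pem" > /tmp/4083.pem
  openssl x509 -noout -hash -in /tmp/4083.pem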

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-123245 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123245 ssh "sudo systemctl is-active crio": exit status 1 (447.079056ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
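
The "Non-zero exit ... exit status 1" above is the passing case: systemctl is-active prints the unit state and exits non-zero for anything other than active, so an inactive cri-o service is exactly what is expected on a cluster whose runtime is Docker. Checked by hand:

  # "inactive" plus a non-zero exit (reported as status 3 over ssh here) means CRI-O is present but not running
  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo systemctl is-active crio"; echo "exit=$?"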

TestFunctional/parallel/License (0.74s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.74s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-123245 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-123245 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [ddc0cd41-5087-403c-a4f3-c5db4b9444fc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [ddc0cd41-5087-403c-a4f3-c5db4b9444fc] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.016216768s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.20s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-123245 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
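
The tunnel subtests together walk the standard minikube tunnel workflow: start the tunnel, wait for the LoadBalancer service to be assigned an ingress IP, then hit that IP directly. Outside the harness the same flow looks roughly like this (commands taken from the log; the tunnel has to keep running, so background it or use a second terminal):

  out/minikube-darwin-amd64 -p functional-123245 tunnel --alsologtostderr &
  kubectl --context functional-123245 apply -f testdata/testsvc.yaml
  IP=$(kubectl --context functional-123245 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl "http://$IP"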

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-123245 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 6037: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "427.437216ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "82.807641ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "455.648744ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "92.037992ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)
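
Both profile listings complete well under a second, and the --light variants are several times faster (83-92ms versus 427-456ms above), since they skip validating each cluster's status. A minimal sketch of consuming the JSON form (piping to jq is an assumption, and the exact field layout can vary between minikube versions):

  # machine-readable profile list, pretty-printed
  out/minikube-darwin-amd64 profile list -o json --light | jq .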

TestFunctional/parallel/MountCmd/any-port (10.71s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-123245 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1921166319/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1673210148636002000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1921166319/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1673210148636002000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1921166319/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1673210148636002000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1921166319/001/test-1673210148636002000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123245 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (451.610055ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 20:35 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 20:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 20:35 test-1673210148636002000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh cat /mount-9p/test-1673210148636002000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-123245 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [0ddcd0a2-0005-4212-b209-fc171ccb0628] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [0ddcd0a2-0005-4212-b209-fc171ccb0628] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [0ddcd0a2-0005-4212-b209-fc171ccb0628] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [0ddcd0a2-0005-4212-b209-fc171ccb0628] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.008797936s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-123245 logs busybox-mount
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123245 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1921166319/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.71s)
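
The first findmnt probe above fails only because the 9p mount had not finished coming up; the test retries and then exercises the share from both the host and the guest. The equivalent manual workflow, with the guest path from the log (the host directory is whatever you want to share; $HOME/shared is only a placeholder):

  out/minikube-darwin-amd64 mount -p functional-123245 "$HOME/shared:/mount-9p" --alsologtostderr -v=1 &
  out/minikube-darwin-amd64 -p functional-123245 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-darwin-amd64 -p functional-123245 ssh -- ls -la /mount-9p
  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo umount -f /mount-9p"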

TestFunctional/parallel/MountCmd/specific-port (2.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-123245 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port354245112/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123245 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (485.583475ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123245 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port354245112/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123245 ssh "sudo umount -f /mount-9p": exit status 1 (496.392329ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-123245 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123245 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port354245112/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.80s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (1.01s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 version -o=json --components
functional_test.go:2197: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 version -o=json --components: (1.012390763s)
--- PASS: TestFunctional/parallel/Version/components (1.01s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123245 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-123245
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-123245
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.40s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123245 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 1403e55ab369c | 142MB  |
| docker.io/library/nginx                     | alpine            | 1e415454686a6 | 40.7MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-123245 | dcea37857ab9d | 1.24MB |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/google-containers/addon-resizer      | functional-123245 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-123245 | 7347e9a705f1d | 30B    |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123245 image ls --format json:
[{"id":"7347e9a705f1dd36497196fb2bc56dd3583c10d5fb4de87e456dfc9553128bcf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-123245"],"size":"30"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-123245"],"size":"32900000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003c
none\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"dcea37857ab9d05b4f1cca279a3874725c41efb81a58668267b64d203defc07a","repoDigests":[],"repo
Tags":["docker.io/localhost/my-image:functional-123245"],"size":"1240000"},{"id":"1e415454686a67ed83fb7aaa41acb2472e7457061bcdbbf0f5143d7a1a89b36c","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"1403e55ab369cd1c8039c34e6b4d47ca40bbde39c371254c7cba14756f472f52","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{
"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.40s)
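
The JSON listing is the easiest of the four image ls formats to post-process; each entry carries id, repoDigests, repoTags and size, as shown above. A small sketch for pulling out tag/size pairs (jq is an assumption, not something the test uses):

  # print "tag size" pairs for every image cached in the cluster
  out/minikube-darwin-amd64 -p functional-123245 image ls --format json \
    | jq -r '.[] | "\(.repoTags[0]) \(.size)"'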

TestFunctional/parallel/ImageCommands/ImageListYaml (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123245 image ls --format yaml:
- id: 1403e55ab369cd1c8039c34e6b4d47ca40bbde39c371254c7cba14756f472f52
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-123245
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 7347e9a705f1dd36497196fb2bc56dd3583c10d5fb4de87e456dfc9553128bcf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-123245
size: "30"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 1e415454686a67ed83fb7aaa41acb2472e7457061bcdbbf0f5143d7a1a89b36c
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.53s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123245 ssh pgrep buildkitd: exit status 1 (570.193435ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image build -t localhost/my-image:functional-123245 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 image build -t localhost/my-image:functional-123245 testdata/build: (5.138827092s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123245 image build -t localhost/my-image:functional-123245 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 431a7edf195a
Removing intermediate container 431a7edf195a
---> 23afbed0f6ff
Step 3/3 : ADD content.txt /
---> dcea37857ab9
Successfully built dcea37857ab9
Successfully tagged localhost/my-image:functional-123245
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.12s)
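
The build runs against the cluster's own Docker daemon, so the resulting localhost/my-image:functional-123245 tag shows up in image ls without any push or load step (it is already visible in the table listing earlier in this report). Judging from the three build steps in the log, the testdata/build context amounts to a Dockerfile like the sketch below plus a content.txt file; the actual fixture contents are not included in this report:

  # reconstructed Dockerfile (assumption based on the Step 1/3..3/3 lines above)
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /

  # build it inside the cluster
  out/minikube-darwin-amd64 -p functional-123245 image build -t localhost/my-image:functional-123245 testdata/build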

TestFunctional/parallel/ImageCommands/Setup (3.92s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.848383731s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-123245
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.92s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image load --daemon gcr.io/google-containers/addon-resizer:functional-123245
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 image load --daemon gcr.io/google-containers/addon-resizer:functional-123245: (4.319140155s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.71s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image load --daemon gcr.io/google-containers/addon-resizer:functional-123245
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 image load --daemon gcr.io/google-containers/addon-resizer:functional-123245: (2.382018573s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
2023/01/08 12:36:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.72953321s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-123245
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image load --daemon gcr.io/google-containers/addon-resizer:functional-123245
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 image load --daemon gcr.io/google-containers/addon-resizer:functional-123245: (3.273985606s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.39s)

TestFunctional/parallel/DockerEnv/bash (1.82s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-123245 docker-env) && out/minikube-darwin-amd64 status -p functional-123245"
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-123245 docker-env) && out/minikube-darwin-amd64 status -p functional-123245": (1.14615483s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-123245 docker-env) && docker images"
E0108 12:36:21.481617    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.82s)
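
DockerEnv/bash confirms that pointing a shell at the cluster's Docker daemon works end to end: after evaluating docker-env, a plain docker images call talks to the daemon inside functional-123245 rather than the host daemon. The usage pattern, exactly as exercised above:

  # point this shell's docker CLI at the minikube node's daemon
  eval $(out/minikube-darwin-amd64 -p functional-123245 docker-env)
  docker images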

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image save gcr.io/google-containers/addon-resizer:functional-123245 /Users/jenkins/workspace/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 image save gcr.io/google-containers/addon-resizer:functional-123245 /Users/jenkins/workspace/addon-resizer-save.tar: (1.385488363s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.51s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.51s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image rm gcr.io/google-containers/addon-resizer:functional-123245
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image load /Users/jenkins/workspace/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.53359646s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.85s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-123245
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123245 image save --daemon gcr.io/google-containers/addon-resizer:functional-123245
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-123245 image save --daemon gcr.io/google-containers/addon-resizer:functional-123245: (3.381774444s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-123245
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.52s)
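
Taken together, the ImageCommands subtests cover the full round trip between the host Docker daemon, a tarball and the cluster's image cache. Collapsed into one sequence (image name and tar path copied from the log):

  out/minikube-darwin-amd64 -p functional-123245 image load --daemon gcr.io/google-containers/addon-resizer:functional-123245    # host daemon -> cluster
  out/minikube-darwin-amd64 -p functional-123245 image save gcr.io/google-containers/addon-resizer:functional-123245 /Users/jenkins/workspace/addon-resizer-save.tar
  out/minikube-darwin-amd64 -p functional-123245 image rm gcr.io/google-containers/addon-resizer:functional-123245
  out/minikube-darwin-amd64 -p functional-123245 image load /Users/jenkins/workspace/addon-resizer-save.tar                      # tarball -> cluster
  out/minikube-darwin-amd64 -p functional-123245 image save --daemon gcr.io/google-containers/addon-resizer:functional-123245    # cluster -> host daemon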

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-123245
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-123245
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-123245
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestJSONOutput/start/Command (52.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-124418 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0108 12:44:59.542964    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-124418 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (52.777236784s)
--- PASS: TestJSONOutput/start/Command (52.78s)
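
With --output=json, minikube start emits its progress as line-delimited JSON events instead of the usual text, and the step counters in that stream are what the DistinctCurrentSteps and IncreasingCurrentSteps subtests below validate. To eyeball the stream by hand, piping each line through jq keeps it readable (jq is an assumption; any line-oriented JSON tool works):

  out/minikube-darwin-amd64 start -p json-output-124418 --output=json --user=testUser --memory=2200 --wait=true --driver=docker | jq .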

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-124418 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-124418 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.27s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-124418 --output=json --user=testUser
E0108 12:45:16.832159    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-124418 --output=json --user=testUser: (12.269760545s)
--- PASS: TestJSONOutput/stop/Command (12.27s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.73s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-124527 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-124527 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (334.623522ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"613329e7-769c-431f-8838-be59588f69b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-124527] minikube v1.28.0 on Darwin 13.0.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"879f94ed-f52b-4a16-a748-b0a5c9518c4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"e827620b-f692-48de-8815-67b8b30f0520","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig"}}
	{"specversion":"1.0","id":"21330112-9476-4ce9-9452-963fb5164137","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"874056af-93e5-47a0-a16d-78f701dc0d88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f35b4336-190d-461f-9265-fc81c60eeedc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube"}}
	{"specversion":"1.0","id":"e2168e28-e4b3-4720-a0ce-3317713f67a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-124527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-124527
--- PASS: TestErrorJSONOutput (0.73s)
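Note: the DistinctCurrentSteps/IncreasingCurrentSteps subtests above assert that the "currentstep" values carried by "io.k8s.sigs.minikube.step" events never repeat and only grow. A minimal stand-alone Go sketch of that check, written against the event shape visible in the dump above (this is an illustration, not minikube's own test code; it assumes the --output=json stream is piped in on stdin):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
	)

	// event mirrors only the fields used here; the full CloudEvents-style
	// payload has more (id, source, datacontenttype, ...) as shown above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		last := -1
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thistool
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
				continue // skip non-JSON lines and non-step events
			}
			cur, err := strconv.Atoi(ev.Data["currentstep"])
			if err != nil {
				continue
			}
			if cur <= last {
				fmt.Printf("currentstep went from %d to %d\n", last, cur)
				os.Exit(1)
			}
			last = cur
		}
	}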

                                                
                                    
TestKicCustomNetwork/create_custom_network (31.29s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-124527 --network=
E0108 12:45:44.519509    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-124527 --network=: (28.629268091s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-124527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-124527
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-124527: (2.598475513s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.29s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (32.28s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-124559 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-124559 --network=bridge: (29.830361311s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-124559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-124559
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-124559: (2.393692202s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.28s)

                                                
                                    
TestKicExistingNetwork (31.24s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-124631 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-124631 --network=existing-network: (28.453568814s)
helpers_test.go:175: Cleaning up "existing-network-124631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-124631
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-124631: (2.417839925s)
--- PASS: TestKicExistingNetwork (31.24s)

                                                
                                    
TestKicCustomSubnet (32.33s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-124702 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-124702 --subnet=192.168.60.0/24: (29.592499937s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-124702 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-124702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-124702
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-124702: (2.674234516s)
--- PASS: TestKicCustomSubnet (32.33s)
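Note: the subnet verification above boils down to a single docker network inspect call. A small Go sketch of the same check, reusing the network name and subnet from this run and assuming docker is on PATH (illustrative only, not the test's implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const network = "custom-subnet-124702"
		const want = "192.168.60.0/24"

		// Same inspect command the test runs to read back the allocated subnet.
		out, err := exec.Command("docker", "network", "inspect", network,
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		got := strings.TrimSpace(string(out))
		fmt.Printf("want %s, got %s, match=%v\n", want, got, got == want)
	}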

                                                
                                    
TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (65.7s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-124735 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-124735 --driver=docker : (29.726601343s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-124735 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-124735 --driver=docker : (28.986642356s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-124735
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-124735
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-124735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-124735
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-124735: (2.5687785s)
helpers_test.go:175: Cleaning up "first-124735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-124735
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-124735: (2.606996198s)
--- PASS: TestMinikubeProfile (65.70s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.24s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-124840 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-124840 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.242325744s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.24s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-124840 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.38s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-124840 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-124840 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.377630559s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.38s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-124840 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.14s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-124840 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-124840 --alsologtostderr -v=5: (2.141854653s)
--- PASS: TestMountStart/serial/DeleteFirst (2.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-124840 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                    
TestMountStart/serial/Stop (1.57s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-124840
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-124840: (1.565916721s)
--- PASS: TestMountStart/serial/Stop (1.57s)

                                                
                                    
TestMountStart/serial/RestartStopped (5.28s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-124840
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-124840: (4.279230631s)
--- PASS: TestMountStart/serial/RestartStopped (5.28s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-124840 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124908 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0108 12:49:59.539474    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:50:16.829126    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-124908 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m38.95041859s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.64s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-124908 -- rollout status deployment/busybox: (4.161397669s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-2jztl -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-k6vhx -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-2jztl -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-k6vhx -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-2jztl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-k6vhx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.97s)
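Note: the deploy check above fetches the busybox pod names with a jsonpath query and then runs nslookup inside each pod. A rough Go sketch of that loop, assuming a plain kubectl on PATH already pointed at the cluster (the test itself goes through out/minikube-darwin-amd64 kubectl -p multinode-124908):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Pod names via jsonpath, as the test does.
		out, err := exec.Command("kubectl", "get", "pods", "-o",
			"jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			fmt.Println("kubectl get pods failed:", err)
			return
		}
		for _, pod := range strings.Fields(string(out)) {
			// Same in-pod lookup the test performs for each busybox replica.
			if err := exec.Command("kubectl", "exec", pod, "--",
				"nslookup", "kubernetes.default").Run(); err != nil {
				fmt.Printf("%s: nslookup failed: %v\n", pod, err)
				continue
			}
			fmt.Printf("%s: kubernetes.default resolves\n", pod)
		}
	}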

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-2jztl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-2jztl -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-k6vhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-124908 -- exec busybox-65db55d5d6-k6vhx -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                    
TestMultiNode/serial/AddNode (27.74s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-124908 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-124908 -v 3 --alsologtostderr: (26.762965838s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status --alsologtostderr
E0108 12:51:22.626137    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/AddNode (27.74s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (15.01s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp testdata/cp-test.txt multinode-124908:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp multinode-124908:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2803111938/001/cp-test_multinode-124908.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp multinode-124908:/home/docker/cp-test.txt multinode-124908-m02:/home/docker/cp-test_multinode-124908_multinode-124908-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m02 "sudo cat /home/docker/cp-test_multinode-124908_multinode-124908-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp multinode-124908:/home/docker/cp-test.txt multinode-124908-m03:/home/docker/cp-test_multinode-124908_multinode-124908-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m03 "sudo cat /home/docker/cp-test_multinode-124908_multinode-124908-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp testdata/cp-test.txt multinode-124908-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp multinode-124908-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2803111938/001/cp-test_multinode-124908-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp multinode-124908-m02:/home/docker/cp-test.txt multinode-124908:/home/docker/cp-test_multinode-124908-m02_multinode-124908.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908 "sudo cat /home/docker/cp-test_multinode-124908-m02_multinode-124908.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp multinode-124908-m02:/home/docker/cp-test.txt multinode-124908-m03:/home/docker/cp-test_multinode-124908-m02_multinode-124908-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m03 "sudo cat /home/docker/cp-test_multinode-124908-m02_multinode-124908-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp testdata/cp-test.txt multinode-124908-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp multinode-124908-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2803111938/001/cp-test_multinode-124908-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp multinode-124908-m03:/home/docker/cp-test.txt multinode-124908:/home/docker/cp-test_multinode-124908-m03_multinode-124908.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908 "sudo cat /home/docker/cp-test_multinode-124908-m03_multinode-124908.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 cp multinode-124908-m03:/home/docker/cp-test.txt multinode-124908-m02:/home/docker/cp-test_multinode-124908-m03_multinode-124908-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 ssh -n multinode-124908-m02 "sudo cat /home/docker/cp-test_multinode-124908-m03_multinode-124908-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (15.01s)
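Note: each hop above is a `minikube cp` followed by a `minikube ssh -n <node> "sudo cat ..."` read-back. A Go sketch of one such round trip, reusing the profile, node, and paths from this run and assuming a `minikube` binary on PATH (the test drives the locally built out/minikube-darwin-amd64 instead):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "multinode-124908"
		node := "multinode-124908-m02"

		// Copy the fixture into the node, as the test does.
		cp := exec.Command("minikube", "-p", profile, "cp",
			"testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			fmt.Printf("cp failed: %v\n%s", err, out)
			return
		}

		// Read it back over ssh to confirm the contents arrived.
		cat := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt")
		out, err := cat.CombinedOutput()
		if err != nil {
			fmt.Printf("ssh cat failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("node copy holds: %s", out)
	}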

                                                
                                    
TestMultiNode/serial/StopNode (13.82s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-124908 node stop m03: (12.29400176s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-124908 status: exit status 7 (767.760322ms)

                                                
                                                
-- stdout --
	multinode-124908
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124908-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124908-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-124908 status --alsologtostderr: exit status 7 (760.942738ms)

                                                
                                                
-- stdout --
	multinode-124908
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-124908-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-124908-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 12:51:51.501688   10005 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:51:51.502302   10005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:51:51.502312   10005 out.go:309] Setting ErrFile to fd 2...
	I0108 12:51:51.502322   10005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:51:51.502577   10005 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 12:51:51.503181   10005 out.go:303] Setting JSON to false
	I0108 12:51:51.503233   10005 mustload.go:65] Loading cluster: multinode-124908
	I0108 12:51:51.503282   10005 notify.go:220] Checking for updates...
	I0108 12:51:51.503634   10005 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:51:51.503648   10005 status.go:255] checking status of multinode-124908 ...
	I0108 12:51:51.504161   10005 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:51:51.563361   10005 status.go:330] multinode-124908 host status = "Running" (err=<nil>)
	I0108 12:51:51.563386   10005 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:51:51.563660   10005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908
	I0108 12:51:51.622043   10005 host.go:66] Checking if "multinode-124908" exists ...
	I0108 12:51:51.622321   10005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 12:51:51.622392   10005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:51:51.683530   10005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51089 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908/id_rsa Username:docker}
	I0108 12:51:51.768172   10005 ssh_runner.go:195] Run: systemctl --version
	I0108 12:51:51.773430   10005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 12:51:51.783523   10005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-124908
	I0108 12:51:51.842997   10005 kubeconfig.go:92] found "multinode-124908" server: "https://127.0.0.1:51088"
	I0108 12:51:51.843023   10005 api_server.go:165] Checking apiserver status ...
	I0108 12:51:51.843093   10005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 12:51:51.853565   10005 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1662/cgroup
	W0108 12:51:51.861971   10005 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1662/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0108 12:51:51.862040   10005 ssh_runner.go:195] Run: ls
	I0108 12:51:51.866131   10005 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51088/healthz ...
	I0108 12:51:51.871717   10005 api_server.go:278] https://127.0.0.1:51088/healthz returned 200:
	ok
	I0108 12:51:51.871731   10005 status.go:421] multinode-124908 apiserver status = Running (err=<nil>)
	I0108 12:51:51.871742   10005 status.go:257] multinode-124908 status: &{Name:multinode-124908 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 12:51:51.871754   10005 status.go:255] checking status of multinode-124908-m02 ...
	I0108 12:51:51.872051   10005 cli_runner.go:164] Run: docker container inspect multinode-124908-m02 --format={{.State.Status}}
	I0108 12:51:51.931833   10005 status.go:330] multinode-124908-m02 host status = "Running" (err=<nil>)
	I0108 12:51:51.931854   10005 host.go:66] Checking if "multinode-124908-m02" exists ...
	I0108 12:51:51.932130   10005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-124908-m02
	I0108 12:51:51.991310   10005 host.go:66] Checking if "multinode-124908-m02" exists ...
	I0108 12:51:51.991592   10005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 12:51:51.991652   10005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-124908-m02
	I0108 12:51:52.051448   10005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51153 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/multinode-124908-m02/id_rsa Username:docker}
	I0108 12:51:52.136062   10005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 12:51:52.145674   10005 status.go:257] multinode-124908-m02 status: &{Name:multinode-124908-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 12:51:52.145707   10005 status.go:255] checking status of multinode-124908-m03 ...
	I0108 12:51:52.146026   10005 cli_runner.go:164] Run: docker container inspect multinode-124908-m03 --format={{.State.Status}}
	I0108 12:51:52.204166   10005 status.go:330] multinode-124908-m03 host status = "Stopped" (err=<nil>)
	I0108 12:51:52.204186   10005 status.go:343] host is not running, skipping remaining checks
	I0108 12:51:52.204195   10005 status.go:257] multinode-124908-m03 status: &{Name:multinode-124908-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (13.82s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (19.49s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-124908 node start m03 --alsologtostderr: (18.390538327s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.49s)

                                                
                                    
TestMultiNode/serial/DeleteNode (7.81s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-124908 node delete m03: (6.975559499s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (7.81s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.82s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 stop
E0108 12:56:39.916493    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-124908 stop: (24.477144847s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-124908 status: exit status 7 (169.833759ms)

                                                
                                                
-- stdout --
	multinode-124908
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-124908-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-124908 status --alsologtostderr: exit status 7 (168.586083ms)

                                                
                                                
-- stdout --
	multinode-124908
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-124908-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 12:56:49.735218   10704 out.go:296] Setting OutFile to fd 1 ...
	I0108 12:56:49.735384   10704 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:56:49.735390   10704 out.go:309] Setting ErrFile to fd 2...
	I0108 12:56:49.735394   10704 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 12:56:49.735504   10704 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
	I0108 12:56:49.735698   10704 out.go:303] Setting JSON to false
	I0108 12:56:49.735722   10704 mustload.go:65] Loading cluster: multinode-124908
	I0108 12:56:49.735757   10704 notify.go:220] Checking for updates...
	I0108 12:56:49.736048   10704 config.go:180] Loaded profile config "multinode-124908": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0108 12:56:49.736064   10704 status.go:255] checking status of multinode-124908 ...
	I0108 12:56:49.736466   10704 cli_runner.go:164] Run: docker container inspect multinode-124908 --format={{.State.Status}}
	I0108 12:56:49.791428   10704 status.go:330] multinode-124908 host status = "Stopped" (err=<nil>)
	I0108 12:56:49.791446   10704 status.go:343] host is not running, skipping remaining checks
	I0108 12:56:49.791452   10704 status.go:257] multinode-124908 status: &{Name:multinode-124908 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 12:56:49.791480   10704 status.go:255] checking status of multinode-124908-m02 ...
	I0108 12:56:49.791765   10704 cli_runner.go:164] Run: docker container inspect multinode-124908-m02 --format={{.State.Status}}
	I0108 12:56:49.847683   10704 status.go:330] multinode-124908-m02 host status = "Stopped" (err=<nil>)
	I0108 12:56:49.847703   10704 status.go:343] host is not running, skipping remaining checks
	I0108 12:56:49.847709   10704 status.go:257] multinode-124908-m02 status: &{Name:multinode-124908-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.82s)
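Note: the stderr traces above print minikube's per-node status struct (Name, Host, Kubelet, APIServer, Kubeconfig, Worker, ...). A speculative Go sketch that shells out to `minikube status --output json` (the form the CopyFile step uses) and decodes it; the JSON key names and the array-for-multinode shape are assumptions inferred from this output, not taken from minikube documentation:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// nodeStatus mirrors the fields visible in the stderr trace above; the
	// JSON key names are an assumption, not a documented schema.
	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// status exits non-zero when a node is stopped (exit status 7 above),
		// so keep whatever output came back even if err != nil.
		out, _ := exec.Command("minikube", "-p", "multinode-124908",
			"status", "--output", "json").Output()

		// Assumed: a multi-node profile reports one object per node.
		var nodes []nodeStatus
		if err := json.Unmarshal(out, &nodes); err != nil {
			var single nodeStatus // single-node profiles report one object
			if err := json.Unmarshal(out, &single); err != nil {
				fmt.Println("unexpected status output:", err)
				return
			}
			nodes = []nodeStatus{single}
		}
		for _, n := range nodes {
			fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
				n.Name, n.Host, n.Kubelet, n.APIServer)
		}
	}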

                                                
                                    
TestMultiNode/serial/RestartMultiNode (75.59s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124908 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-124908 --wait=true -v=8 --alsologtostderr --driver=docker : (1m14.71327149s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-124908 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (75.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.42s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-124908
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124908-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-124908-m02 --driver=docker : exit status 14 (342.630789ms)

                                                
                                                
-- stdout --
	* [multinode-124908-m02] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-124908-m02' is duplicated with machine name 'multinode-124908-m02' in profile 'multinode-124908'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-124908-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-124908-m03 --driver=docker : (29.881291394s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-124908
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-124908: exit status 80 (506.542857ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-124908
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-124908-m03 already exists in multinode-124908-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-124908-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-124908-m03: (2.627662647s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.42s)

                                                
                                    
TestPreload (194.15s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-125847 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0108 12:59:59.576428    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 13:00:16.864484    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-125847 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m48.18808092s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-125847 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-125847 -- docker pull gcr.io/k8s-minikube/busybox: (3.094753638s)
preload_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-125847 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6
preload_test.go:67: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-125847 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6: (1m19.604253766s)
preload_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-125847 -- docker images
helpers_test.go:175: Cleaning up "test-preload-125847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-125847
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-125847: (2.839985384s)
--- PASS: TestPreload (194.15s)

                                                
                                    
TestScheduledStopUnix (103.54s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-130202 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-130202 --memory=2048 --driver=docker : (29.195500767s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-130202 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-130202 -n scheduled-stop-130202
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-130202 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-130202 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-130202 -n scheduled-stop-130202
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-130202
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-130202 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-130202
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-130202: exit status 7 (116.377558ms)

                                                
                                                
-- stdout --
	scheduled-stop-130202
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-130202 -n scheduled-stop-130202
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-130202 -n scheduled-stop-130202: exit status 7 (114.084032ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-130202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-130202
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-130202: (2.322214342s)
--- PASS: TestScheduledStopUnix (103.54s)
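
For reference, the scheduled-stop flow this test walks through can be reproduced by hand with the same commands recorded in the Run lines above (the profile name scheduled-stop-130202 is just the test's generated name; any profile works):

$ out/minikube-darwin-amd64 start -p scheduled-stop-130202 --memory=2048 --driver=docker
$ out/minikube-darwin-amd64 stop -p scheduled-stop-130202 --schedule 5m
$ out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-130202
$ out/minikube-darwin-amd64 stop -p scheduled-stop-130202 --cancel-scheduled
$ out/minikube-darwin-amd64 stop -p scheduled-stop-130202 --schedule 15s
$ out/minikube-darwin-amd64 status -p scheduled-stop-130202    # exit status 7 once the host is Stopped (may be ok)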

                                                
                                    
TestSkaffold (68.07s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe4119628071 version
skaffold_test.go:63: skaffold version: v2.0.4
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-130345 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-130345 --memory=2600 --driver=docker : (29.369595048s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe4119628071 run --minikube-profile skaffold-130345 --kube-context skaffold-130345 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe4119628071 run --minikube-profile skaffold-130345 --kube-context skaffold-130345 --status-check=true --port-forward=false --interactive=false: (22.971398441s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-7bc675cd7f-6wfv2" [96ecec20-4375-4bab-8abc-df7088ac9436] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.014264017s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-6b9d9dd7c6-6hzxs" [7a2b89bf-fdcb-4710-8ca3-bcb8468c8a8d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009393447s
helpers_test.go:175: Cleaning up "skaffold-130345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-130345
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-130345: (2.969267256s)
--- PASS: TestSkaffold (68.07s)
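
The test drives skaffold against a fresh minikube profile; the equivalent manual sequence, reconstructed from the Run lines above (where "skaffold" stands for the versioned binary the test downloads to a temp path), is roughly:

$ out/minikube-darwin-amd64 start -p skaffold-130345 --memory=2600 --driver=docker
$ skaffold run --minikube-profile skaffold-130345 --kube-context skaffold-130345 --status-check=true --port-forward=false --interactive=false
$ out/minikube-darwin-amd64 delete -p skaffold-130345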

                                                
                                    
TestInsufficientStorage (14.52s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-130453 --memory=2048 --output=json --wait=true --driver=docker 
E0108 13:04:59.572369    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-130453 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.312247219s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0cf8442e-02ee-4eb5-bfba-85cb45d8d72e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-130453] minikube v1.28.0 on Darwin 13.0.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"af4b25c9-1d05-490a-9753-44c4ef434a1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"1da36849-007b-4d03-b3be-b0900320add8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig"}}
	{"specversion":"1.0","id":"41f7b2c1-9403-4324-b6a9-a7a8876510b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7ead6456-9f70-4045-b080-94e636c9e995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"12d8d437-d339-4285-86f5-30cd6d2cb57e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube"}}
	{"specversion":"1.0","id":"3cee9f6b-26f8-48ff-b1ff-6302934b7724","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"18589423-f1b7-4756-bc10-87feee8fbdf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d9d791b4-6a62-4fa9-9081-d6858259c789","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"423ff820-10f9-48be-a688-74c95c7a009c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"3d7923ea-0464-49f4-a6cb-ce150d5a9f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-130453 in cluster insufficient-storage-130453","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef4333dd-cd73-42e9-b069-f6c5f1e657c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f87f005-7851-45d2-9fbc-0ae7f061b5c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"df480890-7652-4e53-b740-2e74fc7fdf6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-130453 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-130453 --output=json --layout=cluster: exit status 7 (398.793017ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-130453","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-130453","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:05:05.324452   12554 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-130453" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-130453 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-130453 --output=json --layout=cluster: exit status 7 (398.353941ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-130453","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-130453","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 13:05:05.723675   12564 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-130453" does not appear in /Users/jenkins/minikube-integration/15565-2761/kubeconfig
	E0108 13:05:05.732718   12564 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/insufficient-storage-130453/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-130453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-130453
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-130453: (2.413997853s)
--- PASS: TestInsufficientStorage (14.52s)
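
The failure path above is driven by the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 overrides visible in the JSON events. A rough manual reproduction, assuming those overrides are exported as environment variables the way this run shows them, is:

$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
    out/minikube-darwin-amd64 start -p insufficient-storage-130453 --memory=2048 --output=json --wait=true --driver=docker
  # exits with status 26 (RSRC_DOCKER_STORAGE); the error message notes --force can skip the check
$ out/minikube-darwin-amd64 status -p insufficient-storage-130453 --output=json --layout=cluster
  # exits with status 7 and reports StatusName "InsufficientStorage"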

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.26s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current320836373/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current320836373/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current320836373/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current320836373/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
E0108 13:05:16.859344    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.26s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (14.2s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current469156853/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current469156853/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current469156853/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current469156853/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (14.20s)
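
Both upgrade sub-tests hit the same non-interactive limitation: the hyperkit driver binary needs root ownership and the setuid bit, which minikube cannot apply without a password when --interactive=false. The warning is non-fatal here and both sub-tests still pass; the manual fix is the pair of commands minikube itself prints, with MINIKUBE_HOME pointing at the per-test temp directory shown above:

$ sudo chown root:wheel $MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s $MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit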

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.67s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-131031
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-131031: (3.598782027s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.60s)

                                                
                                    
TestPause/serial/Start (42.69s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-131136 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-131136 --memory=2048 --install-addons=false --wait=all --driver=docker : (42.687200274s)
--- PASS: TestPause/serial/Start (42.69s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (47.73s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-131136 --alsologtostderr -v=1 --driver=docker 
E0108 13:12:24.556983    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-131136 --alsologtostderr -v=1 --driver=docker : (47.722612083s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.73s)

                                                
                                    
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-131136 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-131136 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-131136 --output=json --layout=cluster: exit status 2 (422.40001ms)

                                                
                                                
-- stdout --
	{"Name":"pause-131136","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-131136","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)

                                                
                                    
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-131136 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
TestPause/serial/PauseAgain (0.76s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-131136 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

                                                
                                    
TestPause/serial/DeletePaused (2.64s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-131136 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-131136 --alsologtostderr -v=5: (2.635930416s)
--- PASS: TestPause/serial/DeletePaused (2.64s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.58s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-131136
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-131136: exit status 1 (56.271198ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-131136

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.58s)
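
Taken together, the TestPause serial sub-tests walk the full pause lifecycle; stripped of the test harness, the command sequence recorded above is:

$ out/minikube-darwin-amd64 start -p pause-131136 --memory=2048 --install-addons=false --wait=all --driver=docker
$ out/minikube-darwin-amd64 start -p pause-131136 --alsologtostderr -v=1 --driver=docker    # second start; no reconfiguration expected
$ out/minikube-darwin-amd64 pause -p pause-131136 --alsologtostderr -v=5
$ out/minikube-darwin-amd64 status -p pause-131136 --output=json --layout=cluster           # exit status 2 while paused (StatusName "Paused")
$ out/minikube-darwin-amd64 unpause -p pause-131136 --alsologtostderr -v=5
$ out/minikube-darwin-amd64 pause -p pause-131136 --alsologtostderr -v=5
$ out/minikube-darwin-amd64 delete -p pause-131136 --alsologtostderr -v=5
$ docker volume inspect pause-131136    # "No such volume" confirms cleanup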

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-131313 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-131313 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (403.823203ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-131313] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)
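
The exit status 14 above is the expected usage error: --no-kubernetes and --kubernetes-version are mutually exclusive. If a kubernetes-version is set globally, clearing it first (the command minikube suggests in the stderr above) lets the no-Kubernetes start go through:

$ minikube config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-131313 --no-kubernetes --driver=docker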

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-131313 --driver=docker 
E0108 13:13:19.996866    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-131313 --driver=docker : (29.228855639s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-131313 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.69s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-131313 --no-kubernetes --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-131313 --no-kubernetes --driver=docker : (15.712238674s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-131313 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-131313 status -o json: exit status 2 (436.155322ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-131313","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-131313
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-131313: (2.460214867s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.61s)

                                                
                                    
TestNoKubernetes/serial/Start (6.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-131313 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-131313 --no-kubernetes --driver=docker : (6.716734153s)
--- PASS: TestNoKubernetes/serial/Start (6.72s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-131313 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-131313 "sudo systemctl is-active --quiet service kubelet": exit status 1 (391.680549ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (17.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (16.361103319s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (17.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-131313
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-131313: (1.624469839s)
--- PASS: TestNoKubernetes/serial/Stop (1.62s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (4.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-131313 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-131313 --driver=docker : (4.280403323s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.28s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-131313 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-131313 "sudo systemctl is-active --quiet service kubelet": exit status 1 (379.976998ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)
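
Both VerifyK8sNotRunning checks probe the kubelet unit inside the node over ssh; the non-zero exit is what the test expects when Kubernetes is disabled. The probe, as run above, is simply:

$ out/minikube-darwin-amd64 ssh -p NoKubernetes-131313 "sudo systemctl is-active --quiet service kubelet"
$ echo $?    # non-zero means kubelet is not active, i.e. Kubernetes is off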

                                                
                                    
TestNetworkPlugins/group/auto/Start (46.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
E0108 13:14:40.715796    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:14:59.660135    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 13:15:08.398768    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:15:16.947159    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (46.34897283s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.35s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-130508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-130508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-hdttm" [a3f035ad-19ff-44cc-b209-766534d2b033] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-hdttm" [a3f035ad-19ff-44cc-b209-766534d2b033] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.010834123s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-130508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.132076484s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.13s)
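
Each network-plugin group repeats the same connectivity probes against a netcat deployment created from testdata/netcat-deployment.yaml. For the auto profile the checks reduce to the following kubectl calls (the hairpin probe's exit status 1 above did not fail the test):

$ kubectl --context auto-130508 replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context auto-130508 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"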

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (62.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (1m2.406610542s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-8gnmc" [48adde6a-9d07-4bd6-8615-5679b571dc83] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.016134091s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-130508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (14.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-130508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-n5clv" [777c4b17-6e87-4d23-a46c-9a3b10b39c45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-n5clv" [777c4b17-6e87-4d23-a46c-9a3b10b39c45] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.010629646s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-130508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (47.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (47.012316903s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (47.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-130508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-130508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-d7pt2" [aa438145-b4b8-4603-8337-4cce17b01729] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-d7pt2" [aa438145-b4b8-4603-8337-4cce17b01729] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.029607871s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-130508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Start (46.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (46.147625203s)
--- PASS: TestNetworkPlugins/group/false/Start (46.15s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-130508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (15.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-130508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-b54hx" [86971012-cc51-4206-a450-b54784453d62] Pending
helpers_test.go:342: "netcat-5788d667bd-b54hx" [86971012-cc51-4206-a450-b54784453d62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-b54hx" [86971012-cc51-4206-a450-b54784453d62] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 15.010977772s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (48.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (48.564424461s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.56s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-130508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.118489445s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (46.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
E0108 13:19:40.716274    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:19:59.660936    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-130508 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (46.436742414s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (46.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-130508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (15.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-130508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-qxv99" [0aeb98df-921c-4c60-85cc-cad4d30b3091] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-qxv99" [0aeb98df-921c-4c60-85cc-cad4d30b3091] Running
E0108 13:20:16.947708    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.008041063s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-130508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-130508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-dt2bl" [f3b55fdd-0864-41b2-b6f7-beb0f27129cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-dt2bl" [f3b55fdd-0864-41b2-b6f7-beb0f27129cb] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.00802655s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-130508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (97.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-130509 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
E0108 13:20:21.383575    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:20:21.389365    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:20:21.400664    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:20:21.420784    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:20:21.460946    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:20:21.543050    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:20:21.703362    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:20:22.025198    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:20:22.665366    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-130509 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m37.846004068s)
--- PASS: TestNetworkPlugins/group/cilium/Start (97.85s)
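For readability, the start invocation above reflowed onto multiple lines; the CNI-specific groups differ only in the --cni value (cilium here, calico below), while --wait=true with --wait-timeout=5m makes minikube block until its default set of components is ready or the timeout expires (a sketch, not a prescribed reproduction recipe):

    out/minikube-darwin-amd64 start -p cilium-130509 \
      --memory=2048 --alsologtostderr \
      --wait=true --wait-timeout=5m \
      --cni=cilium --driver=docker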

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-130508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-130508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (329.79s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-130509 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0108 13:21:43.309467    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:21:44.875114    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:44.880521    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:44.890820    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:44.911783    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:44.952659    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:45.033973    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:45.194710    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:45.515280    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:46.155503    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:47.441010    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:50.001483    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:21:55.121685    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-130509 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (5m29.789822736s)
--- PASS: TestNetworkPlugins/group/calico/Start (329.79s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-x8rmn" [1beb9513-bef0-41ea-a062-96db26870b3a] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.016417403s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-130509 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (15.61s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-130509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-b6m62" [ca61c6c5-98ec-4d88-8621-3b5e282b40be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 13:22:05.362016    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-b6m62" [ca61c6c5-98ec-4d88-8621-3b5e282b40be] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 15.008392277s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (15.61s)
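The NetCatPod step applies testdata/netcat-deployment.yaml with kubectl replace --force and then polls for pods labelled app=netcat until they report Running. A rough manual equivalent using kubectl wait (illustration only; the harness does its own polling rather than calling kubectl wait):

    kubectl --context cilium-130509 replace --force -f testdata/netcat-deployment.yaml
    # hypothetical stand-in for the harness's label poll, with the same 15m bound
    kubectl --context cilium-130509 wait --for=condition=ready pod -l app=netcat --timeout=15m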

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-130509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-130509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-130509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-wxn5l" [6d2d9540-3876-4c17-abaa-85aa4ad89189] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0108 13:26:59.098574    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:26:59.103647    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:26:59.113742    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:26:59.134651    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:26:59.175181    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:26:59.255372    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:26:59.415741    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:26:59.736120    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.01442211s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-130509 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-130509 replace --force -f testdata/netcat-deployment.yaml
E0108 13:27:00.376290    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-78fqk" [0890f972-a3dd-4071-abd6-503aad1c1d2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 13:27:01.658402    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:27:04.218657    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-78fqk" [0890f972-a3dd-4071-abd6-503aad1c1d2b] Running
E0108 13:27:09.338862    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:27:12.564596    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.009513772s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-130509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-130509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-130509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)
E0108 13:48:59.442308    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (56.43s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-132717 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E0108 13:27:19.579666    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:27:40.060220    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:27:46.910026    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:27:53.885137    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:27:55.372185    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-132717 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (56.434177491s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-132223 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-132223 --alsologtostderr -v=3: (1.683336161s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-132223 -n old-k8s-version-132223: exit status 7 (151.772036ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-132223 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.30s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-132717 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [8c8e6e8a-b84c-4081-88c0-c7fc9b7289d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [8c8e6e8a-b84c-4081-88c0-c7fc9b7289d9] Running
E0108 13:28:21.021838    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:28:23.061079    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.017862302s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-132717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)
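DeployApp creates the busybox pod from testdata/busybox.yaml, waits for the integration-test=busybox label to become healthy, and then reads the container's open-file-descriptor limit with ulimit -n. A hedged manual equivalent (kubectl wait is used here only as an illustration of the readiness poll, with the same 8m bound as the test):

    kubectl --context no-preload-132717 create -f testdata/busybox.yaml
    kubectl --context no-preload-132717 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-132717 exec busybox -- /bin/sh -c "ulimit -n"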

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-132717 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-132717 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.40s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-132717 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-132717 --alsologtostderr -v=3: (12.399725121s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-132717 -n no-preload-132717
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-132717 -n no-preload-132717: exit status 7 (117.658921ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-132717 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.39s)
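EnableAddonAfterStop relies on minikube status accepting a Go template via --format and reflecting component state in its exit code; after a stop, the host prints Stopped and the command exits non-zero (7 in this run), which the test explicitly tolerates before enabling the dashboard addon. A sketch of the same sequence:

    out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-132717 -n no-preload-132717
    echo $?   # 7 in this run; non-zero is expected while the profile is stopped
    out/minikube-darwin-amd64 addons enable dashboard -p no-preload-132717 \
      --images=MetricsScraper=k8s.gcr.io/echoserver:1.4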

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (303.51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-132717 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E0108 13:28:59.407305    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:29:27.098525    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
E0108 13:29:40.718812    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:29:42.942504    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:29:59.662959    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 13:30:00.001738    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 13:30:03.060528    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:30:10.042088    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:30:16.950907    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 13:30:21.384465    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
E0108 13:30:30.750920    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
E0108 13:30:37.728118    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:31:44.877365    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
E0108 13:31:54.923477    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:54.929466    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:54.941007    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:54.961131    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:55.003237    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:55.085468    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:55.245908    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:55.565991    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:56.206452    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:57.487044    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:31:59.099907    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:32:00.047791    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:32:05.168085    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:32:15.408369    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:32:26.784828    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory
E0108 13:32:35.890685    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:32:55.371886    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/enable-default-cni-130508/client.crt: no such file or directory
E0108 13:33:16.851099    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-132717 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (5m2.967769464s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-132717 -n no-preload-132717
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (303.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-hb2t8" [a6fcd044-0251-43bb-90b3-a7d772eb6b8b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-hb2t8" [a6fcd044-0251-43bb-90b3-a7d772eb6b8b] Running
E0108 13:33:59.432914    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/false-130508/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.014879861s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-hb2t8" [a6fcd044-0251-43bb-90b3-a7d772eb6b8b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007702228s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-132717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-132717 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)
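VerifyKubernetesImages shells into the node and lists images through the CRI; the test then parses the JSON and reports anything outside the expected Kubernetes/minikube image set, which is why the busybox image used by DeployApp shows up above as a non-minikube image. A hedged sketch for eyeballing the same data by hand (jq is an assumption here, not something the test uses):

    out/minikube-darwin-amd64 ssh -p no-preload-132717 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'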

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.43s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-132717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-132717 -n no-preload-132717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-132717 -n no-preload-132717: exit status 2 (426.69414ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-132717 -n no-preload-132717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-132717 -n no-preload-132717: exit status 2 (427.265609ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-132717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-132717 -n no-preload-132717
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-132717 -n no-preload-132717
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.43s)
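The Pause step pauses the profile, confirms the new state through two status probes (the API server reports Paused and the kubelet reports Stopped, each probe exiting with status 2, which the harness accepts), then unpauses and re-checks. The same round trip, condensed into a sketch:

    out/minikube-darwin-amd64 pause -p no-preload-132717 --alsologtostderr -v=1
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-132717 -n no-preload-132717   # Paused
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-132717 -n no-preload-132717     # Stopped
    out/minikube-darwin-amd64 unpause -p no-preload-132717 --alsologtostderr -v=1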

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (45.74s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-133414 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3
E0108 13:34:38.801166    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:34:40.750642    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/skaffold-130345/client.crt: no such file or directory
E0108 13:34:59.693819    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-133414 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (45.735565266s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-133414 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [41a48547-c35b-47ba-8569-a66f01ba86a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 13:35:03.091355    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/bridge-130508/client.crt: no such file or directory
helpers_test.go:342: "busybox" [41a48547-c35b-47ba-8569-a66f01ba86a0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013375581s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-133414 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-133414 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-133414 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-133414 --alsologtostderr -v=3
E0108 13:35:10.072492    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kubenet-130508/client.crt: no such file or directory
E0108 13:35:16.981616    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 13:35:21.415466    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/auto-130508/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-133414 --alsologtostderr -v=3: (12.42754147s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.40s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-133414 -n embed-certs-133414
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-133414 -n embed-certs-133414: exit status 7 (114.518724ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-133414 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (301.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-133414 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-133414 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (5m1.238605371s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-133414 -n embed-certs-133414
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (301.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (20.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-dhqvv" [037b9f51-e819-4c59-a086-e567dea5ee07] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-dhqvv" [037b9f51-e819-4c59-a086-e567dea5ee07] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.024092657s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (20.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-dhqvv" [037b9f51-e819-4c59-a086-e567dea5ee07] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009648364s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-133414 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-133414 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.40s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-133414 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-133414 -n embed-certs-133414
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-133414 -n embed-certs-133414: exit status 2 (469.889698ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-133414 -n embed-certs-133414
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-133414 -n embed-certs-133414: exit status 2 (425.328342ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-133414 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-133414 -n embed-certs-133414
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-133414 -n embed-certs-133414
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-134057 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3
E0108 13:40:58.230994    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/no-preload-132717/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-134057 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (45.771599307s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.77s)
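The default-k8s-diff-port group differs from the other StartStop groups only in --apiserver-port=8444, i.e. the API server is exposed on a non-default port rather than minikube's usual 8443. Reflowed for readability, plus a hypothetical follow-up check that the kubeconfig entry really points at the custom port (the jsonpath query is an illustration, not part of the test):

    out/minikube-darwin-amd64 start -p default-k8s-diff-port-134057 --memory=2200 \
      --alsologtostderr --wait=true --apiserver-port=8444 \
      --driver=docker --kubernetes-version=v1.25.3
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-134057")].cluster.server}'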

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-134057 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [db0e07aa-6f49-4b7e-b783-64e7b0cf476e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 13:41:44.911338    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/kindnet-130508/client.crt: no such file or directory
helpers_test.go:342: "busybox" [db0e07aa-6f49-4b7e-b783-64e7b0cf476e] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.014636765s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-134057 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-134057 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-134057 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-134057 --alsologtostderr -v=3
E0108 13:41:54.955175    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
E0108 13:41:59.133329    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/cilium-130509/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-134057 --alsologtostderr -v=3: (12.48397646s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-134057 -n default-k8s-diff-port-134057
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-134057 -n default-k8s-diff-port-134057: exit status 7 (114.215236ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-134057 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-134057 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-134057 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (4m55.543077274s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-134057 -n default-k8s-diff-port-134057
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-748dp" [cd34c706-b3b4-4521-816a-46d558eff783] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-748dp" [cd34c706-b3b4-4521-816a-46d558eff783] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.019383162s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-748dp" [cd34c706-b3b4-4521-816a-46d558eff783] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007100873s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-134057 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-134057 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-134057 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134057 -n default-k8s-diff-port-134057
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134057 -n default-k8s-diff-port-134057: exit status 2 (426.211369ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134057 -n default-k8s-diff-port-134057
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134057 -n default-k8s-diff-port-134057: exit status 2 (429.095838ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-134057 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134057 -n default-k8s-diff-port-134057
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134057 -n default-k8s-diff-port-134057
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.32s)

TestStartStop/group/newest-cni/serial/FirstStart (42.8s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-134733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-134733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (42.802980134s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.80s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-134733 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/newest-cni/serial/Stop (12.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-134733 --alsologtostderr -v=3
E0108 13:48:18.008337    4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/calico-130509/client.crt: no such file or directory
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-134733 --alsologtostderr -v=3: (12.51584722s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.52s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-134733 -n newest-cni-134733
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-134733 -n newest-cni-134733: exit status 7 (113.227164ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-134733 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/newest-cni/serial/SecondStart (19.34s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-134733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-134733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (18.862240953s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-134733 -n newest-cni-134733
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.34s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.53s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-134733 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.53s)

TestStartStop/group/newest-cni/serial/Pause (3.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-134733 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-134733 -n newest-cni-134733
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-134733 -n newest-cni-134733: exit status 2 (480.813888ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-134733 -n newest-cni-134733
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-134733 -n newest-cni-134733: exit status 2 (422.596142ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-134733 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-134733 -n newest-cni-134733
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-134733 -n newest-cni-134733
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.38s)

Test skip (18/295)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestAddons/parallel/Registry (15.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 12.166625ms
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-hc8wj" [b28c8a62-1f51-4a39-9ada-5c21b6f75a43] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010832448s
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-v2cr7" [60a53f2c-bb8c-46e9-953f-073e61da0cb8] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011737479s
addons_test.go:297: (dbg) Run:  kubectl --context addons-122726 delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context addons-122726 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) Done: kubectl --context addons-122726 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.848924912s)
addons_test.go:312: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.96s)

TestAddons/parallel/Ingress (12.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Run:  kubectl --context addons-122726 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:189: (dbg) Run:  kubectl --context addons-122726 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context addons-122726 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [8f1ce9dd-0baa-4181-bb67-0060316c838f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [8f1ce9dd-0baa-4181-bb67-0060316c838f] Running
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006267275s
addons_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 -p addons-122726 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.26s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (14.18s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-123245 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-123245 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-cqtbj" [5382b6fe-9fa6-43c6-9331-c044d4ddb7c2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-cqtbj" [5382b6fe-9fa6-43c6-9331-c044d4ddb7c2] Running
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.011454065s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (14.18s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.62s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-130508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-130508
--- SKIP: TestNetworkPlugins/group/flannel (0.62s)

TestNetworkPlugins/group/custom-flannel (0.55s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-130508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-130508
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.55s)

TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-134056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-134056
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)