Test Report: Docker_Linux 12739

68de712bd09ffe1e21223c2fc0b3d10921a9e762:2022-05-12:23920

Failed tests (7/281)

TestNetworkPlugins/group/calico/Start (520.72s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker
E0512 23:27:07.582318  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
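
[Note: the cert_rotation error above is client-go's certificate reloader complaining about a client.crt left over from the earlier functional-20220512225541-516044 profile; it is almost certainly noise unrelated to this test's failure. A quick way to confirm the stale reference (hypothetical check, not part of the original run):

    # confirm the referenced profile certificate is indeed gone
    ls -l /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt
]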

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (8m40.69156612s)

-- stdout --
	* [calico-20220512231715-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node calico-20220512231715-516044 in cluster calico-20220512231715-516044
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
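
[Note: the stdout above shows the start made it through addon enablement and into "Verifying Kubernetes components..."; given --wait=true and --wait-timeout=5m, exit status 80 after 8m40s points at component verification never completing, most plausibly the Calico pods failing to become Ready. A post-mortem against the stuck profile might look like this (hypothetical follow-up, not part of the run):

    # inspect pod status and collect logs from the stuck cluster
    out/minikube-linux-amd64 -p calico-20220512231715-516044 kubectl -- get pods -A
    out/minikube-linux-amd64 -p calico-20220512231715-516044 logs --file=calico-start.log
]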
** stderr ** 
	I0512 23:27:05.390332  817261 out.go:296] Setting OutFile to fd 1 ...
	I0512 23:27:05.390679  817261 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:27:05.390729  817261 out.go:309] Setting ErrFile to fd 2...
	I0512 23:27:05.390754  817261 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:27:05.390951  817261 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 23:27:05.391356  817261 out.go:303] Setting JSON to false
	I0512 23:27:05.393485  817261 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":22181,"bootTime":1652375844,"procs":1086,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 23:27:05.393581  817261 start.go:125] virtualization: kvm guest
	I0512 23:27:05.396217  817261 out.go:177] * [calico-20220512231715-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0512 23:27:05.398402  817261 out.go:177]   - MINIKUBE_LOCATION=12739
	I0512 23:27:05.398409  817261 notify.go:193] Checking for updates...
	I0512 23:27:05.401204  817261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 23:27:05.402704  817261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:27:05.404161  817261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 23:27:05.405527  817261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0512 23:27:05.407359  817261 config.go:178] Loaded profile config "cilium-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:27:05.407521  817261 config.go:178] Loaded profile config "embed-certs-20220512231813-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:27:05.407639  817261 config.go:178] Loaded profile config "false-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:27:05.407716  817261 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 23:27:05.452046  817261 docker.go:137] docker version: linux-20.10.16
	I0512 23:27:05.452139  817261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:27:05.567389  817261 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:27:05.484631014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:27:05.567505  817261 docker.go:254] overlay module found
	I0512 23:27:05.569620  817261 out.go:177] * Using the docker driver based on user configuration
	I0512 23:27:05.570855  817261 start.go:284] selected driver: docker
	I0512 23:27:05.570872  817261 start.go:806] validating driver "docker" against <nil>
	I0512 23:27:05.570897  817261 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 23:27:05.571942  817261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:27:05.696526  817261 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:27:05.607643307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:27:05.696665  817261 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 23:27:05.696876  817261 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 23:27:05.698513  817261 out.go:177] * Using Docker driver with the root privilege
	I0512 23:27:05.699772  817261 cni.go:95] Creating CNI manager for "calico"
	I0512 23:27:05.699801  817261 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0512 23:27:05.699814  817261 start_flags.go:306] config:
	{Name:calico-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:27:05.701506  817261 out.go:177] * Starting control plane node calico-20220512231715-516044 in cluster calico-20220512231715-516044
	I0512 23:27:05.702830  817261 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 23:27:05.704089  817261 out.go:177] * Pulling base image ...
	I0512 23:27:05.705452  817261 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:27:05.705498  817261 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 23:27:05.705512  817261 cache.go:57] Caching tarball of preloaded images
	I0512 23:27:05.705533  817261 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0512 23:27:05.705771  817261 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 23:27:05.705792  817261 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 23:27:05.706244  817261 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/config.json ...
	I0512 23:27:05.706340  817261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/config.json: {Name:mk7a98d33c72f65dcfdccec9d36c7fd9c4026a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:05.761250  817261 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
	I0512 23:27:05.761293  817261 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in daemon, skipping load
	I0512 23:27:05.761311  817261 cache.go:206] Successfully downloaded all kic artifacts
	I0512 23:27:05.761365  817261 start.go:352] acquiring machines lock for calico-20220512231715-516044: {Name:mk080b3fa2005c8ccaff4cd929636a22a9892bd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 23:27:05.761547  817261 start.go:356] acquired machines lock for "calico-20220512231715-516044" in 152.417µs
	I0512 23:27:05.761589  817261 start.go:91] Provisioning new machine with config: &{Name:calico-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:27:05.761712  817261 start.go:131] createHost starting for "" (driver="docker")
	I0512 23:27:05.764911  817261 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 23:27:05.765231  817261 start.go:165] libmachine.API.Create for "calico-20220512231715-516044" (driver="docker")
	I0512 23:27:05.765278  817261 client.go:168] LocalClient.Create starting
	I0512 23:27:05.765357  817261 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem
	I0512 23:27:05.765389  817261 main.go:134] libmachine: Decoding PEM data...
	I0512 23:27:05.765403  817261 main.go:134] libmachine: Parsing certificate...
	I0512 23:27:05.765481  817261 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem
	I0512 23:27:05.765498  817261 main.go:134] libmachine: Decoding PEM data...
	I0512 23:27:05.765511  817261 main.go:134] libmachine: Parsing certificate...
	I0512 23:27:05.765846  817261 cli_runner.go:164] Run: docker network inspect calico-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 23:27:05.800900  817261 cli_runner.go:211] docker network inspect calico-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 23:27:05.801002  817261 network_create.go:272] running [docker network inspect calico-20220512231715-516044] to gather additional debugging logs...
	I0512 23:27:05.801029  817261 cli_runner.go:164] Run: docker network inspect calico-20220512231715-516044
	W0512 23:27:05.834507  817261 cli_runner.go:211] docker network inspect calico-20220512231715-516044 returned with exit code 1
	I0512 23:27:05.834549  817261 network_create.go:275] error running [docker network inspect calico-20220512231715-516044]: docker network inspect calico-20220512231715-516044: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220512231715-516044
	I0512 23:27:05.834566  817261 network_create.go:277] output of [docker network inspect calico-20220512231715-516044]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220512231715-516044
	
	** /stderr **
	I0512 23:27:05.834620  817261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 23:27:05.866029  817261 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-27d2f9c8191f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f5:4a:35:55}}
	I0512 23:27:05.866660  817261 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-89430e854caa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:3c:30:bc:16}}
	I0512 23:27:05.867128  817261 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-37f634322f53 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:0b:a9:b0:99}}
	I0512 23:27:05.867737  817261 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc0007a2470] misses:0}
	I0512 23:27:05.867788  817261 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 23:27:05.867808  817261 network_create.go:115] attempt to create docker network calico-20220512231715-516044 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0512 23:27:05.867851  817261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512231715-516044
	I0512 23:27:05.941496  817261 network_create.go:99] docker network calico-20220512231715-516044 192.168.76.0/24 created
	I0512 23:27:05.941533  817261 kic.go:106] calculated static IP "192.168.76.2" for the "calico-20220512231715-516044" container
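
[Note: the subnet scan above is minikube stepping through candidate /24s already held by other test clusters (192.168.49.0, .58, .67) before reserving 192.168.76.0/24 and deriving the node IP 192.168.76.2. The docker-side view of which subnets are taken can be reproduced with a one-liner (hypothetical check, standard docker CLI):

    # list each docker network with the subnet it claims
    docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)
]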
	I0512 23:27:05.941600  817261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 23:27:05.976156  817261 cli_runner.go:164] Run: docker volume create calico-20220512231715-516044 --label name.minikube.sigs.k8s.io=calico-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true
	I0512 23:27:06.008207  817261 oci.go:103] Successfully created a docker volume calico-20220512231715-516044
	I0512 23:27:06.008304  817261 cli_runner.go:164] Run: docker run --rm --name calico-20220512231715-516044-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512231715-516044 --entrypoint /usr/bin/test -v calico-20220512231715-516044:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -d /var/lib
	I0512 23:27:06.641204  817261 oci.go:107] Successfully prepared a docker volume calico-20220512231715-516044
	I0512 23:27:06.641261  817261 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:27:06.641286  817261 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 23:27:06.641368  817261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 23:27:13.390534  817261 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir: (6.749077877s)
	I0512 23:27:13.390572  817261 kic.go:188] duration metric: took 6.749281 seconds to extract preloaded images to volume
	W0512 23:27:13.390725  817261 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0512 23:27:13.390861  817261 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 23:27:13.513043  817261 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220512231715-516044 --name calico-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512231715-516044 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220512231715-516044 --network calico-20220512231715-516044 --ip 192.168.76.2 --volume calico-20220512231715-516044:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c
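
[Note: the node is this single privileged container; each guest port (22 for SSH, 8443 for the apiserver, 2376 for the in-node dockerd, and so on) is published to a random loopback port via the "127.0.0.1::<port>" syntax. The resulting host-side mappings can be read back afterwards (hypothetical check):

    # read back the host port mapped to the guest apiserver port
    docker port calico-20220512231715-516044 8443
]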
	I0512 23:27:13.926475  817261 cli_runner.go:164] Run: docker container inspect calico-20220512231715-516044 --format={{.State.Running}}
	I0512 23:27:13.960922  817261 cli_runner.go:164] Run: docker container inspect calico-20220512231715-516044 --format={{.State.Status}}
	I0512 23:27:13.993205  817261 cli_runner.go:164] Run: docker exec calico-20220512231715-516044 stat /var/lib/dpkg/alternatives/iptables
	I0512 23:27:14.099601  817261 oci.go:144] the created container "calico-20220512231715-516044" has a running status.
	I0512 23:27:14.099645  817261 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/calico-20220512231715-516044/id_rsa...
	I0512 23:27:14.273023  817261 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/calico-20220512231715-516044/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 23:27:14.365205  817261 cli_runner.go:164] Run: docker container inspect calico-20220512231715-516044 --format={{.State.Status}}
	I0512 23:27:14.401843  817261 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 23:27:14.401866  817261 kic_runner.go:114] Args: [docker exec --privileged calico-20220512231715-516044 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 23:27:14.509886  817261 cli_runner.go:164] Run: docker container inspect calico-20220512231715-516044 --format={{.State.Status}}
	I0512 23:27:14.545788  817261 machine.go:88] provisioning docker machine ...
	I0512 23:27:14.545853  817261 ubuntu.go:169] provisioning hostname "calico-20220512231715-516044"
	I0512 23:27:14.545925  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:14.581316  817261 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:14.581570  817261 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0512 23:27:14.581593  817261 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220512231715-516044 && echo "calico-20220512231715-516044" | sudo tee /etc/hostname
	I0512 23:27:14.735234  817261 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220512231715-516044
	
	I0512 23:27:14.735324  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:14.766796  817261 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:14.766941  817261 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0512 23:27:14.766961  817261 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220512231715-516044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220512231715-516044/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220512231715-516044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 23:27:14.896825  817261 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 23:27:14.896859  817261 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube}
	I0512 23:27:14.896887  817261 ubuntu.go:177] setting up certificates
	I0512 23:27:14.896898  817261 provision.go:83] configureAuth start
	I0512 23:27:14.896950  817261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512231715-516044
	I0512 23:27:14.928030  817261 provision.go:138] copyHostCerts
	I0512 23:27:14.928093  817261 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem, removing ...
	I0512 23:27:14.928104  817261 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem
	I0512 23:27:14.928174  817261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem (1078 bytes)
	I0512 23:27:14.928263  817261 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem, removing ...
	I0512 23:27:14.928277  817261 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem
	I0512 23:27:14.928301  817261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem (1123 bytes)
	I0512 23:27:14.928356  817261 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem, removing ...
	I0512 23:27:14.928364  817261 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem
	I0512 23:27:14.928384  817261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem (1675 bytes)
	I0512 23:27:14.928436  817261 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem org=jenkins.calico-20220512231715-516044 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220512231715-516044]
	I0512 23:27:15.051674  817261 provision.go:172] copyRemoteCerts
	I0512 23:27:15.051732  817261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 23:27:15.051772  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:15.087249  817261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/calico-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:15.185381  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 23:27:15.206039  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0512 23:27:15.225728  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0512 23:27:15.245345  817261 provision.go:86] duration metric: configureAuth took 348.430895ms
	I0512 23:27:15.245372  817261 ubuntu.go:193] setting minikube options for container-runtime
	I0512 23:27:15.245533  817261 config.go:178] Loaded profile config "calico-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:27:15.245583  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:15.279976  817261 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:15.280138  817261 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0512 23:27:15.280155  817261 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 23:27:15.417599  817261 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 23:27:15.417628  817261 ubuntu.go:71] root file system type: overlay
	I0512 23:27:15.417850  817261 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 23:27:15.417932  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:15.461613  817261 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:15.461799  817261 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0512 23:27:15.461900  817261 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 23:27:15.602578  817261 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 23:27:15.602659  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:15.636896  817261 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:15.637084  817261 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0512 23:27:15.637152  817261 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 23:27:16.515460  817261 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 23:27:15.598221161 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
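
[Note: the diff above captures the systemd override idiom minikube relies on: an empty ExecStart= first clears the command inherited from the stock unit, and only then is the replacement ExecStart= accepted, since Type=notify services may declare exactly one. Stripped of minikube's specifics, the same idiom as a conventional drop-in would look roughly like this (hypothetical sketch; paths and dockerd flags assumed):

    # clear, then replace, ExecStart via a systemd drop-in
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' |
      sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker
]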
	I0512 23:27:16.515493  817261 machine.go:91] provisioned docker machine in 1.969668041s
	I0512 23:27:16.515505  817261 client.go:171] LocalClient.Create took 10.750220201s
	I0512 23:27:16.515518  817261 start.go:173] duration metric: libmachine.API.Create for "calico-20220512231715-516044" took 10.75028678s
	I0512 23:27:16.515527  817261 start.go:306] post-start starting for "calico-20220512231715-516044" (driver="docker")
	I0512 23:27:16.515537  817261 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 23:27:16.515601  817261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 23:27:16.515687  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:16.550481  817261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/calico-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:16.666001  817261 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 23:27:16.668950  817261 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 23:27:16.668977  817261 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 23:27:16.668989  817261 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 23:27:16.668998  817261 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 23:27:16.669013  817261 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/addons for local assets ...
	I0512 23:27:16.669076  817261 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files for local assets ...
	I0512 23:27:16.669221  817261 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem -> 5160442.pem in /etc/ssl/certs
	I0512 23:27:16.669347  817261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 23:27:16.685761  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /etc/ssl/certs/5160442.pem (1708 bytes)
	I0512 23:27:16.709981  817261 start.go:309] post-start completed in 194.435111ms
	I0512 23:27:16.710306  817261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512231715-516044
	I0512 23:27:16.741900  817261 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/config.json ...
	I0512 23:27:16.742194  817261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 23:27:16.742245  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:16.774845  817261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/calico-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:16.873928  817261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 23:27:16.879884  817261 start.go:134] duration metric: createHost completed in 11.118155756s
	I0512 23:27:16.879916  817261 start.go:81] releasing machines lock for "calico-20220512231715-516044", held for 11.11834313s
	I0512 23:27:16.880011  817261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512231715-516044
	I0512 23:27:16.919543  817261 ssh_runner.go:195] Run: systemctl --version
	I0512 23:27:16.919583  817261 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 23:27:16.919607  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:16.919633  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:16.961254  817261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/calico-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:16.962622  817261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/calico-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:17.090598  817261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 23:27:17.105045  817261 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:27:17.117376  817261 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 23:27:17.117461  817261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 23:27:17.130173  817261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 23:27:17.145355  817261 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 23:27:17.252646  817261 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 23:27:17.335317  817261 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:27:17.346291  817261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 23:27:17.433383  817261 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 23:27:17.442993  817261 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:27:17.486644  817261 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:27:17.531986  817261 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 23:27:17.532080  817261 cli_runner.go:164] Run: docker network inspect calico-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 23:27:17.574499  817261 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0512 23:27:17.578694  817261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 23:27:17.590955  817261 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:27:17.591019  817261 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:27:17.630868  817261 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:27:17.630897  817261 docker.go:541] Images already preloaded, skipping extraction
	I0512 23:27:17.630957  817261 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:27:17.669893  817261 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:27:17.669916  817261 cache_images.go:84] Images are preloaded, skipping loading
	I0512 23:27:17.669961  817261 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 23:27:17.782291  817261 cni.go:95] Creating CNI manager for "calico"
	I0512 23:27:17.782327  817261 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 23:27:17.782349  817261 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220512231715-516044 NodeName:calico-20220512231715-516044 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 23:27:17.782526  817261 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220512231715-516044"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
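	The three YAML documents above (InitConfiguration, ClusterConfiguration, and the kubelet/kube-proxy component configs) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch for validating a file like this before bootstrap, assuming the bundled kubeadm v1.23.5 binary; --dry-run renders the manifests without actually starting the control plane:
	
	    sudo /var/lib/minikube/binaries/v1.23.5/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run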
	I0512 23:27:17.782643  817261 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220512231715-516044 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:calico-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
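	The drop-in above clears ExecStart and replaces it so the bundled v1.23.5 kubelet runs against Docker with the CNI network plugin. A minimal sketch for checking the rendered unit on the node, mirroring the systemctl calls this log runs elsewhere:
	
	    sudo systemctl daemon-reload
	    systemctl cat kubelet
	    systemctl is-active kubelet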
	I0512 23:27:17.782711  817261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 23:27:17.791810  817261 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 23:27:17.791878  817261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 23:27:17.809187  817261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0512 23:27:17.833331  817261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 23:27:17.846857  817261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0512 23:27:17.860631  817261 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0512 23:27:17.863818  817261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 23:27:17.880089  817261 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044 for IP: 192.168.76.2
	I0512 23:27:17.880219  817261 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key
	I0512 23:27:17.880282  817261 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key
	I0512 23:27:17.880346  817261 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/client.key
	I0512 23:27:17.880361  817261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/client.crt with IP's: []
	I0512 23:27:18.068660  817261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/client.crt ...
	I0512 23:27:18.068693  817261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/client.crt: {Name:mkb9d29389612554cd3af71fa5f0ec968200c0eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:18.068887  817261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/client.key ...
	I0512 23:27:18.068905  817261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/client.key: {Name:mka84b9fd4ddeb30676fb499ad2cd8677c279212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:18.069022  817261 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.key.31bdca25
	I0512 23:27:18.069038  817261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 23:27:18.400662  817261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.crt.31bdca25 ...
	I0512 23:27:18.400710  817261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.crt.31bdca25: {Name:mk971b6a8b7f6dac152c99f2f4704aaf85a8ef41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:18.400914  817261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.key.31bdca25 ...
	I0512 23:27:18.400930  817261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.key.31bdca25: {Name:mk0164200d30e48815d98d048e30d10b8d46a798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:18.401045  817261 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.crt
	I0512 23:27:18.401143  817261 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.key
	I0512 23:27:18.401232  817261 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/proxy-client.key
	I0512 23:27:18.401256  817261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/proxy-client.crt with IP's: []
	I0512 23:27:18.519207  817261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/proxy-client.crt ...
	I0512 23:27:18.519240  817261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/proxy-client.crt: {Name:mkcac354bc4a338114eb4751649124631af1d62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:18.519460  817261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/proxy-client.key ...
	I0512 23:27:18.519483  817261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/proxy-client.key: {Name:mk6679a4ba39fc8de44ba5f7323b9749e139671a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:18.519759  817261 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem (1338 bytes)
	W0512 23:27:18.519828  817261 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044_empty.pem, impossibly tiny 0 bytes
	I0512 23:27:18.519842  817261 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem (1679 bytes)
	I0512 23:27:18.519875  817261 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem (1078 bytes)
	I0512 23:27:18.519901  817261 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem (1123 bytes)
	I0512 23:27:18.519924  817261 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem (1675 bytes)
	I0512 23:27:18.519965  817261 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem (1708 bytes)
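	The profile certificates generated above are copied into /var/lib/minikube/certs next. A minimal sketch for confirming the apiserver certificate carries the SANs requested earlier (192.168.76.2, 10.96.0.1, 127.0.0.1, 10.0.0.1), assuming openssl is available on the host:
	
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'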
	I0512 23:27:18.520582  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 23:27:18.540554  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 23:27:18.558253  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 23:27:18.577808  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/calico-20220512231715-516044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 23:27:18.603062  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 23:27:18.626822  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0512 23:27:18.646829  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 23:27:18.664908  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0512 23:27:18.689007  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 23:27:18.716845  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem --> /usr/share/ca-certificates/516044.pem (1338 bytes)
	I0512 23:27:18.781788  817261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /usr/share/ca-certificates/5160442.pem (1708 bytes)
	I0512 23:27:18.806468  817261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 23:27:18.824214  817261 ssh_runner.go:195] Run: openssl version
	I0512 23:27:18.829389  817261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5160442.pem && ln -fs /usr/share/ca-certificates/5160442.pem /etc/ssl/certs/5160442.pem"
	I0512 23:27:18.837002  817261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5160442.pem
	I0512 23:27:18.840029  817261 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 12 22:55 /usr/share/ca-certificates/5160442.pem
	I0512 23:27:18.840096  817261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5160442.pem
	I0512 23:27:18.844815  817261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5160442.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 23:27:18.851825  817261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 23:27:18.858893  817261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:27:18.861756  817261 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 12 22:51 /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:27:18.861792  817261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:27:18.866541  817261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 23:27:18.874610  817261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516044.pem && ln -fs /usr/share/ca-certificates/516044.pem /etc/ssl/certs/516044.pem"
	I0512 23:27:18.885510  817261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516044.pem
	I0512 23:27:18.889693  817261 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 12 22:55 /usr/share/ca-certificates/516044.pem
	I0512 23:27:18.889757  817261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516044.pem
	I0512 23:27:18.895155  817261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516044.pem /etc/ssl/certs/51391683.0"
	I0512 23:27:18.904769  817261 kubeadm.go:391] StartCluster: {Name:calico-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:27:18.904912  817261 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 23:27:18.941582  817261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 23:27:18.948822  817261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 23:27:18.955645  817261 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 23:27:18.955695  817261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 23:27:18.962511  817261 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 23:27:18.962554  817261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
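	The init command above skips a fixed list of preflight checks that do not apply under the docker driver (swap, memory, SystemVerification, and the pre-created directories). A minimal sketch for re-running just the preflight phase against the same config, assuming the same bundled kubeadm binary:
	
	    sudo /var/lib/minikube/binaries/v1.23.5/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml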
	I0512 23:27:30.765887  817261 out.go:204]   - Generating certificates and keys ...
	I0512 23:27:30.769026  817261 out.go:204]   - Booting up control plane ...
	I0512 23:27:30.771789  817261 out.go:204]   - Configuring RBAC rules ...
	I0512 23:27:30.773890  817261 cni.go:95] Creating CNI manager for "calico"
	I0512 23:27:30.775519  817261 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0512 23:27:30.777015  817261 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0512 23:27:30.777038  817261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0512 23:27:30.797905  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0512 23:27:32.417024  817261 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.619069141s)
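	With the manifest applied, Calico runs as a calico-node DaemonSet plus a calico-kube-controllers Deployment in kube-system. A minimal sketch for watching them come up, assuming the label keys used by the stock Calico manifest:
	
	    kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
	    kubectl -n kube-system get pods -l k8s-app=calico-kube-controllers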
	I0512 23:27:32.417074  817261 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 23:27:32.417209  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=5812f8ec06db4997111dc3269784a7f664662f05 minikube.k8s.io/name=calico-20220512231715-516044 minikube.k8s.io/updated_at=2022_05_12T23_27_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:32.417209  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:32.498406  817261 ops.go:34] apiserver oom_adj: -16
	I0512 23:27:32.498493  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:33.092170  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:33.591633  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:34.092485  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:34.592502  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:35.091884  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:35.591557  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:36.091700  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:36.592377  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:37.091968  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:37.592527  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:38.092485  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:38.592566  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:39.093580  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:39.592478  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:40.091959  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:40.592568  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:41.092595  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:41.591700  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:42.091834  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:43.092283  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:43.591620  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:44.092446  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:44.592631  817261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:27:44.900374  817261 kubeadm.go:1020] duration metric: took 12.483220371s to wait for elevateKubeSystemPrivileges.
	I0512 23:27:44.900414  817261 kubeadm.go:393] StartCluster complete in 25.995655079s
	I0512 23:27:44.900439  817261 settings.go:142] acquiring lock: {Name:mkfe717360cf8b2fa45465ab4bd68ece68561c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:44.900554  817261 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:27:44.902103  817261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig: {Name:mk0f3828db53b6683822ca2fe8148b87d561cdb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:45.690229  817261 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220512231715-516044" rescaled to 1
	I0512 23:27:45.690306  817261 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:27:45.729659  817261 out.go:177] * Verifying Kubernetes components...
	I0512 23:27:45.690356  817261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 23:27:45.690388  817261 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 23:27:45.690541  817261 config.go:178] Loaded profile config "calico-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:27:45.827390  817261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:27:45.827414  817261 addons.go:65] Setting default-storageclass=true in profile "calico-20220512231715-516044"
	I0512 23:27:45.827458  817261 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220512231715-516044"
	I0512 23:27:45.827396  817261 addons.go:65] Setting storage-provisioner=true in profile "calico-20220512231715-516044"
	I0512 23:27:45.827497  817261 addons.go:153] Setting addon storage-provisioner=true in "calico-20220512231715-516044"
	W0512 23:27:45.827511  817261 addons.go:165] addon storage-provisioner should already be in state true
	I0512 23:27:45.827561  817261 host.go:66] Checking if "calico-20220512231715-516044" exists ...
	I0512 23:27:45.827880  817261 cli_runner.go:164] Run: docker container inspect calico-20220512231715-516044 --format={{.State.Status}}
	I0512 23:27:45.828079  817261 cli_runner.go:164] Run: docker container inspect calico-20220512231715-516044 --format={{.State.Status}}
	I0512 23:27:45.842079  817261 node_ready.go:35] waiting up to 5m0s for node "calico-20220512231715-516044" to be "Ready" ...
	I0512 23:27:45.923547  817261 addons.go:153] Setting addon default-storageclass=true in "calico-20220512231715-516044"
	W0512 23:27:45.935538  817261 addons.go:165] addon default-storageclass should already be in state true
	I0512 23:27:45.935580  817261 host.go:66] Checking if "calico-20220512231715-516044" exists ...
	I0512 23:27:45.924034  817261 node_ready.go:49] node "calico-20220512231715-516044" has status "Ready":"True"
	I0512 23:27:45.935641  817261 node_ready.go:38] duration metric: took 93.531897ms waiting for node "calico-20220512231715-516044" to be "Ready" ...
	I0512 23:27:45.935658  817261 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:27:45.927775  817261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 23:27:45.935507  817261 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 23:27:45.944732  817261 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:27:45.936310  817261 cli_runner.go:164] Run: docker container inspect calico-20220512231715-516044 --format={{.State.Status}}
	I0512 23:27:45.944792  817261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 23:27:45.944871  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:45.947096  817261 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace to be "Ready" ...
	I0512 23:27:46.013296  817261 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 23:27:46.013326  817261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 23:27:46.013391  817261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512231715-516044
	I0512 23:27:46.037062  817261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/calico-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:46.069761  817261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/calico-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:46.303857  817261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 23:27:46.388253  817261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:27:47.976532  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:27:48.079967  817261 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.144228296s)
	I0512 23:27:48.080009  817261 start.go:815] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0512 23:27:48.098530  817261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.794623435s)
	I0512 23:27:48.119254  817261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.730947495s)
	I0512 23:27:48.121290  817261 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0512 23:27:48.122581  817261 addons.go:417] enableAddons completed in 2.43220497s
	I0512 23:27:49.979247  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:27:52.468167  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:27:54.468723  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:27:56.479397  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:27:58.977889  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:00.981570  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:02.984634  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:05.480867  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:07.975297  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:10.474792  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:12.476979  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:14.978096  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:17.469878  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:19.988219  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:22.470707  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:24.968786  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:26.969151  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:28.978404  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:30.980533  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:33.476217  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:35.477295  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:37.480114  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:39.968161  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:42.476255  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:44.478289  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:46.968381  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:48.978100  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:50.978316  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:52.978356  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:55.467832  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:57.481247  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:59.967910  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:01.968652  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:03.974884  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:06.467854  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:08.476943  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:10.477925  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:12.967550  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:14.978574  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:17.468434  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:19.476036  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:21.477906  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:23.968270  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:25.977525  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:27.980380  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:30.477827  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:32.478372  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:34.976940  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:37.477931  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:39.973307  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:41.978236  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:44.468770  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:46.469633  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:48.477776  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:50.477809  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:52.478353  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:54.478745  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:56.968137  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:58.978195  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:00.978780  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:03.467812  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:05.478342  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:07.967819  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:09.968674  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:11.978136  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:14.468896  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:16.476268  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:18.478193  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:20.976047  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:23.476930  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:25.478220  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:27.976029  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:29.978870  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:32.478236  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:34.977791  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:37.476346  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:39.476793  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:41.967569  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:43.967993  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:45.976150  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:48.469934  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:50.978269  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:53.467910  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:55.978305  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:58.477955  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:00.478561  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:02.968630  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:04.977983  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:07.468152  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:09.976356  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:11.978009  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:14.478561  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:16.974816  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:18.977000  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:20.977483  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:22.978171  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:25.476942  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:27.478027  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:29.978119  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:32.477599  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:34.478157  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:36.976294  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:38.977541  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:41.469287  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:43.976024  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:45.976127  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:45.980535  817261 pod_ready.go:81] duration metric: took 4m0.033406573s waiting for pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace to be "Ready" ...
	E0512 23:31:45.980568  817261 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:31:45.980580  817261 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-wzwqd" in "kube-system" namespace to be "Ready" ...
	I0512 23:31:47.993055  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:50.494786  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:52.992187  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:54.993340  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:57.492844  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:59.494116  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:01.994121  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:03.994187  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:06.493790  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:08.992606  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:10.992702  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:12.993451  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:14.993526  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:17.493066  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:19.494355  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:21.992271  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:23.995958  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:26.492193  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:28.496052  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:30.993003  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:32.993251  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:34.993417  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:37.493752  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:39.992662  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:42.493628  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:44.993359  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:47.493754  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:49.992930  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:52.492681  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:54.492876  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:56.497018  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:58.993752  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:00.994293  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:03.492520  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:05.992842  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:07.993318  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:10.494595  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:12.993328  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:14.994587  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:16.995023  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:19.493668  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:21.494960  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:23.993780  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:26.493564  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:28.993432  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:31.494173  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:33.993491  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:36.494489  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:38.993513  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:40.993561  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:43.494198  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:45.992983  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:47.993498  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:49.994985  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:52.492030  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:54.493252  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:56.994077  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:33:59.493377  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:01.493645  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:03.992135  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:05.993800  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:08.492622  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:10.993910  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:12.995943  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:15.493024  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:17.494740  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:19.994647  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:22.492106  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:24.494222  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:26.991985  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:28.993367  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:31.493975  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:33.993042  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:35.993639  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:38.493741  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:40.993828  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:43.492101  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:45.492483  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:47.492918  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:49.993575  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:51.993633  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:53.996127  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:56.493410  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:34:58.991979  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:00.992221  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:02.994302  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:05.492246  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:07.993815  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:10.492085  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:12.493764  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:14.994580  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:17.493455  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:19.992912  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:21.993310  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:23.993577  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:26.493823  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:28.993680  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:31.493018  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:33.493992  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:35.494774  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:37.992933  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:40.493895  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:42.992772  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:44.993158  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:35:45.998986  817261 pod_ready.go:81] duration metric: took 4m0.018387972s waiting for pod "calico-node-wzwqd" in "kube-system" namespace to be "Ready" ...
	E0512 23:35:45.999020  817261 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:35:45.999278  817261 pod_ready.go:38] duration metric: took 8m0.063594904s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
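A note on the timing here: although each per-pod wait advertises "waiting up to 5m0s", both Calico pods above timed out at the 4-minute mark, and the two sequential 4m waits account for the 8m0s of "extra waiting" reported, which is how the run spent roughly 8m40s against a nominal --wait-timeout=5m. The same readiness transitions can be watched by hand with kubectl (a sketch; assumes the cluster from this run is still up and that the kubectl context carries the profile name, as minikube normally arranges):

    # Sketch: watch the calico pods this run was polling (context name assumed).
    kubectl --context calico-20220512231715-516044 -n kube-system get pods -w \
      -l k8s-app=calico-node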
	I0512 23:35:46.001797  817261 out.go:177] 
	W0512 23:35:46.003500  817261 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0512 23:35:46.003521  817261 out.go:239] * 
	W0512 23:35:46.004317  817261 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 23:35:46.006224  817261 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (520.72s)
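The failure mode, then, is narrow: container creation, kubeadm bootstrap, and RBAC all succeeded, but the Calico pods never reported Ready inside their wait budget, so the start exits with GUEST_START (status 80). For triage, a first pass is usually to pull conditions, events, and container logs from the affected profile (a sketch; assumes the profile still exists and the kubectl context matches its name):

    # Sketch: first-pass diagnostics for the stuck calico pods (names from this run).
    kubectl --context calico-20220512231715-516044 -n kube-system describe pod -l k8s-app=calico-node
    kubectl --context calico-20220512231715-516044 -n kube-system logs -l k8s-app=calico-node --all-containers --tail=100
    # Or let minikube bundle everything, as the boxed advice suggests:
    out/minikube-linux-amd64 -p calico-20220512231715-516044 logs --file=logs.txt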

TestNetworkPlugins/group/custom-weave/Start (519.27s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p custom-weave-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: exit status 105 (8m39.249236543s)

-- stdout --
	* [custom-weave-20220512231715-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node custom-weave-20220512231715-516044 in cluster custom-weave-20220512231715-516044
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0512 23:27:39.574787  826131 out.go:296] Setting OutFile to fd 1 ...
	I0512 23:27:39.574971  826131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:27:39.574983  826131 out.go:309] Setting ErrFile to fd 2...
	I0512 23:27:39.574988  826131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:27:39.575107  826131 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 23:27:39.575416  826131 out.go:303] Setting JSON to false
	I0512 23:27:39.577579  826131 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":22216,"bootTime":1652375844,"procs":1209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 23:27:39.577659  826131 start.go:125] virtualization: kvm guest
	I0512 23:27:39.580362  826131 out.go:177] * [custom-weave-20220512231715-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0512 23:27:39.581899  826131 out.go:177]   - MINIKUBE_LOCATION=12739
	I0512 23:27:39.581844  826131 notify.go:193] Checking for updates...
	I0512 23:27:39.583207  826131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 23:27:39.584732  826131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:27:39.586078  826131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 23:27:39.587396  826131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0512 23:27:39.589232  826131 config.go:178] Loaded profile config "calico-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:27:39.589377  826131 config.go:178] Loaded profile config "cilium-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:27:39.589523  826131 config.go:178] Loaded profile config "embed-certs-20220512231813-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:27:39.589593  826131 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 23:27:39.643073  826131 docker.go:137] docker version: linux-20.10.16
	I0512 23:27:39.643188  826131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:27:39.756099  826131 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:27:39.676204423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:27:39.756260  826131 docker.go:254] overlay module found
	I0512 23:27:39.758668  826131 out.go:177] * Using the docker driver based on user configuration
	I0512 23:27:39.760278  826131 start.go:284] selected driver: docker
	I0512 23:27:39.760300  826131 start.go:806] validating driver "docker" against <nil>
	I0512 23:27:39.760326  826131 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 23:27:39.761562  826131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:27:39.888585  826131 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:27:39.80152821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:27:39.888697  826131 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 23:27:39.888918  826131 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 23:27:39.890878  826131 out.go:177] * Using Docker driver with the root privilege
	I0512 23:27:39.892078  826131 cni.go:95] Creating CNI manager for "testdata/weavenet.yaml"
	I0512 23:27:39.892105  826131 start_flags.go:301] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0512 23:27:39.892119  826131 start_flags.go:306] config:
	{Name:custom-weave-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:27:39.893736  826131 out.go:177] * Starting control plane node custom-weave-20220512231715-516044 in cluster custom-weave-20220512231715-516044
	I0512 23:27:39.894903  826131 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 23:27:39.896085  826131 out.go:177] * Pulling base image ...
	I0512 23:27:39.897339  826131 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:27:39.897378  826131 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0512 23:27:39.897386  826131 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 23:27:39.897404  826131 cache.go:57] Caching tarball of preloaded images
	I0512 23:27:39.897622  826131 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 23:27:39.897639  826131 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 23:27:39.897744  826131 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/config.json ...
	I0512 23:27:39.897775  826131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/config.json: {Name:mkc21fa20c37121847696cb596dc7de8f38d268f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:39.944513  826131 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
	I0512 23:27:39.944557  826131 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in daemon, skipping load
	I0512 23:27:39.944574  826131 cache.go:206] Successfully downloaded all kic artifacts
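The "skipping pull / skipping load" pair above means the digest-pinned kicbase image was already present in the host's Docker daemon, so the "Pulling base image ..." step needed no registry traffic. What the daemon holds can be confirmed directly (a sketch):

    # Sketch: check whether the pinned kicbase image is already local.
    docker images --digests gcr.io/k8s-minikube/kicbase-builds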
	I0512 23:27:39.944626  826131 start.go:352] acquiring machines lock for custom-weave-20220512231715-516044: {Name:mk440e2df494624911cc0c1477f96bf68e246603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 23:27:39.944780  826131 start.go:356] acquired machines lock for "custom-weave-20220512231715-516044" in 126.397µs
	I0512 23:27:39.944814  826131 start.go:91] Provisioning new machine with config: &{Name:custom-weave-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:27:39.944936  826131 start.go:131] createHost starting for "" (driver="docker")
	I0512 23:27:39.947207  826131 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 23:27:39.947497  826131 start.go:165] libmachine.API.Create for "custom-weave-20220512231715-516044" (driver="docker")
	I0512 23:27:39.947533  826131 client.go:168] LocalClient.Create starting
	I0512 23:27:39.947602  826131 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem
	I0512 23:27:39.947635  826131 main.go:134] libmachine: Decoding PEM data...
	I0512 23:27:39.947653  826131 main.go:134] libmachine: Parsing certificate...
	I0512 23:27:39.947736  826131 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem
	I0512 23:27:39.947764  826131 main.go:134] libmachine: Decoding PEM data...
	I0512 23:27:39.947782  826131 main.go:134] libmachine: Parsing certificate...
	I0512 23:27:39.948179  826131 cli_runner.go:164] Run: docker network inspect custom-weave-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 23:27:39.981971  826131 cli_runner.go:211] docker network inspect custom-weave-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 23:27:39.982070  826131 network_create.go:272] running [docker network inspect custom-weave-20220512231715-516044] to gather additional debugging logs...
	I0512 23:27:39.982102  826131 cli_runner.go:164] Run: docker network inspect custom-weave-20220512231715-516044
	W0512 23:27:40.015020  826131 cli_runner.go:211] docker network inspect custom-weave-20220512231715-516044 returned with exit code 1
	I0512 23:27:40.015068  826131 network_create.go:275] error running [docker network inspect custom-weave-20220512231715-516044]: docker network inspect custom-weave-20220512231715-516044: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220512231715-516044
	I0512 23:27:40.015091  826131 network_create.go:277] output of [docker network inspect custom-weave-20220512231715-516044]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220512231715-516044
	
	** /stderr **
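The exit-code-1 inspect above is the expected path on a fresh profile: minikube probes for an existing network of the profile's name, and "No such network" is what clears it to create one. The probe is reproducible by hand, and minikube-created networks carry an identifying label (a sketch):

    # Sketch: list networks minikube created, via the label it applies below.
    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
    # The probe from the log; fails (as here) until the network exists:
    docker network inspect custom-weave-20220512231715-516044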
	I0512 23:27:40.015165  826131 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 23:27:40.049840  826131 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006fa7e0] misses:0}
	I0512 23:27:40.049898  826131 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 23:27:40.049920  826131 network_create.go:115] attempt to create docker network custom-weave-20220512231715-516044 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 23:27:40.049974  826131 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512231715-516044
	I0512 23:27:40.130822  826131 network_create.go:99] docker network custom-weave-20220512231715-516044 192.168.49.0/24 created
	I0512 23:27:40.130866  826131 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20220512231715-516044" container
	I0512 23:27:40.130919  826131 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 23:27:40.168397  826131 cli_runner.go:164] Run: docker volume create custom-weave-20220512231715-516044 --label name.minikube.sigs.k8s.io=custom-weave-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true
	I0512 23:27:40.212423  826131 oci.go:103] Successfully created a docker volume custom-weave-20220512231715-516044
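The volume created here backs /var inside the node container (Docker's image store, kubelet state, and so on), and it carries the same minikube labels as the network, so it can be located and cleaned up the same way (a sketch):

    # Sketch: inspect the per-profile volume and enumerate all minikube volumes.
    docker volume inspect custom-weave-20220512231715-516044
    docker volume ls --filter label=created_by.minikube.sigs.k8s.io=true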
	I0512 23:27:40.212519  826131 cli_runner.go:164] Run: docker run --rm --name custom-weave-20220512231715-516044-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512231715-516044 --entrypoint /usr/bin/test -v custom-weave-20220512231715-516044:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -d /var/lib
	I0512 23:27:40.798702  826131 oci.go:107] Successfully prepared a docker volume custom-weave-20220512231715-516044
	I0512 23:27:40.798748  826131 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:27:40.798771  826131 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 23:27:40.798851  826131 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 23:27:45.978257  826131 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir: (5.179329839s)
	I0512 23:27:45.978316  826131 kic.go:188] duration metric: took 5.179539 seconds to extract preloaded images to volume
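This ~5.2s step untars the lz4 preload straight into the profile volume from a throwaway container, so the node boots with its image cache pre-populated instead of pulling Kubernetes images. A quick spot-check is another disposable container over the same volume (a sketch; the entrypoint override mirrors the log's own pattern, and the path listed is illustrative):

    # Sketch: peek into the populated volume (tag as pinned in this run, digest omitted).
    docker run --rm --entrypoint /bin/ls \
      -v custom-weave-20220512231715-516044:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791 /var/lib/docker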
	W0512 23:27:45.979081  826131 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0512 23:27:45.979630  826131 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 23:27:46.140855  826131 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512231715-516044 --name custom-weave-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512231715-516044 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512231715-516044 --network custom-weave-20220512231715-516044 --ip 192.168.49.2 --volume custom-weave-20220512231715-516044:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c
	I0512 23:27:46.802731  826131 cli_runner.go:164] Run: docker container inspect custom-weave-20220512231715-516044 --format={{.State.Running}}
	I0512 23:27:46.836934  826131 cli_runner.go:164] Run: docker container inspect custom-weave-20220512231715-516044 --format={{.State.Status}}
	I0512 23:27:46.869896  826131 cli_runner.go:164] Run: docker exec custom-weave-20220512231715-516044 stat /var/lib/dpkg/alternatives/iptables
	I0512 23:27:46.950196  826131 oci.go:144] the created container "custom-weave-20220512231715-516044" has a running status.
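Note the publish flags in the run command above: each of the node's service ports (22, 2376, 5000, 8443, 32443) is mapped to an ephemeral port on 127.0.0.1, which is why later steps inspect `NetworkSettings.Ports` to discover, for example, the SSH port (49442 in this run). The same mapping is visible with (a sketch):

    # Sketch: show which loopback port fronts the node's SSH port.
    docker port custom-weave-20220512231715-516044 22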
	I0512 23:27:46.950240  826131 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/custom-weave-20220512231715-516044/id_rsa...
	I0512 23:27:47.120392  826131 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/custom-weave-20220512231715-516044/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 23:27:47.236008  826131 cli_runner.go:164] Run: docker container inspect custom-weave-20220512231715-516044 --format={{.State.Status}}
	I0512 23:27:47.278622  826131 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 23:27:47.278656  826131 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220512231715-516044 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 23:27:47.422555  826131 cli_runner.go:164] Run: docker container inspect custom-weave-20220512231715-516044 --format={{.State.Status}}
	I0512 23:27:47.466338  826131 machine.go:88] provisioning docker machine ...
	I0512 23:27:47.466384  826131 ubuntu.go:169] provisioning hostname "custom-weave-20220512231715-516044"
	I0512 23:27:47.466449  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:47.515790  826131 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:47.515998  826131 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0512 23:27:47.516032  826131 main.go:134] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220512231715-516044 && echo "custom-weave-20220512231715-516044" | sudo tee /etc/hostname
	I0512 23:27:47.669971  826131 main.go:134] libmachine: SSH cmd err, output: <nil>: custom-weave-20220512231715-516044
	
	I0512 23:27:47.670074  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:47.713848  826131 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:47.714056  826131 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0512 23:27:47.714088  826131 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220512231715-516044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220512231715-516044/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220512231715-516044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 23:27:47.856950  826131 main.go:134] libmachine: SSH cmd err, output: <nil>: 
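The script above is the usual Debian-style self-resolution fix: it rewrites (or appends) a 127.0.1.1 entry so the node container can resolve its own hostname without DNS, which tools like sudo and kubeadm otherwise complain about. The result can be checked over minikube's own SSH path (a sketch):

    # Sketch: confirm the hostname mapping inside the node container.
    out/minikube-linux-amd64 -p custom-weave-20220512231715-516044 ssh -- grep custom-weave /etc/hosts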
	I0512 23:27:47.856987  826131 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube}
	I0512 23:27:47.857008  826131 ubuntu.go:177] setting up certificates
	I0512 23:27:47.857019  826131 provision.go:83] configureAuth start
	I0512 23:27:47.857082  826131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512231715-516044
	I0512 23:27:47.903536  826131 provision.go:138] copyHostCerts
	I0512 23:27:47.903631  826131 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem, removing ...
	I0512 23:27:47.903648  826131 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem
	I0512 23:27:47.903719  826131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem (1078 bytes)
	I0512 23:27:47.903813  826131 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem, removing ...
	I0512 23:27:47.903829  826131 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem
	I0512 23:27:47.903864  826131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem (1123 bytes)
	I0512 23:27:47.903937  826131 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem, removing ...
	I0512 23:27:47.903950  826131 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem
	I0512 23:27:47.903982  826131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem (1675 bytes)
	I0512 23:27:47.904036  826131 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220512231715-516044 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220512231715-516044]
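The server certificate is minted with a SAN list covering every name this endpoint might be dialed by: the static node IP, loopback (listed twice, harmlessly), localhost, the generic "minikube" name, and the profile name. If a TLS verification error surfaces later, checking the SANs on the generated cert is cheap (a sketch; $MINIKUBE_HOME is as printed in this run's stdout):

    # Sketch: print the SANs baked into the generated server cert.
    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
      | grep -A1 'Subject Alternative Name'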
	I0512 23:27:48.030961  826131 provision.go:172] copyRemoteCerts
	I0512 23:27:48.031020  826131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 23:27:48.031057  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:48.063219  826131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/custom-weave-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:48.156331  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 23:27:48.174034  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0512 23:27:48.191245  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0512 23:27:48.208845  826131 provision.go:86] duration metric: configureAuth took 351.804417ms
	I0512 23:27:48.208872  826131 ubuntu.go:193] setting minikube options for container-runtime
	I0512 23:27:48.209044  826131 config.go:178] Loaded profile config "custom-weave-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:27:48.209135  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:48.242824  826131 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:48.242988  826131 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0512 23:27:48.243011  826131 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 23:27:48.377246  826131 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 23:27:48.377278  826131 ubuntu.go:71] root file system type: overlay
	I0512 23:27:48.377450  826131 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 23:27:48.377515  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:48.411266  826131 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:48.411428  826131 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0512 23:27:48.411527  826131 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 23:27:48.555084  826131 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
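The empty `ExecStart=` followed by a populated one is the standard systemd override idiom the file's own comments describe: the blank directive clears the ExecStart inherited from the base unit, so the service does not end up with two start commands. The step that follows only swaps the file in (and restarts docker) when `diff -u` finds a difference, making the update idempotent. The effective unit can be read back afterwards (a sketch, run inside the node container, e.g. via `minikube ssh`):

    # Sketch: show the unit file systemd will actually use after the swap.
    sudo systemctl daemon-reload
    systemctl cat docker.service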
	I0512 23:27:48.555164  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:48.596313  826131 main.go:134] libmachine: Using SSH client type: native
	I0512 23:27:48.596457  826131 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0512 23:27:48.596475  826131 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 23:27:49.397517  826131 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 23:27:48.552620599 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
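	Note that the diff-and-replace command above is what makes provisioning idempotent: diff -u exits 0 when the generated unit matches what is already installed, which short-circuits the || group, so the mv/daemon-reload/restart sequence runs only when the file actually changed (here it did, hence the hunk output). The pattern in isolation (hypothetical variable names, not minikube's code):
	
	CUR=/lib/systemd/system/docker.service
	NEW=$CUR.new
	sudo diff -u "$CUR" "$NEW" || {
	  sudo mv "$NEW" "$CUR"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	}
	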
	I0512 23:27:49.397560  826131 machine.go:91] provisioned docker machine in 1.931197165s
	I0512 23:27:49.397574  826131 client.go:171] LocalClient.Create took 9.450035336s
	I0512 23:27:49.397594  826131 start.go:173] duration metric: libmachine.API.Create for "custom-weave-20220512231715-516044" took 9.450101001s
	I0512 23:27:49.397612  826131 start.go:306] post-start starting for "custom-weave-20220512231715-516044" (driver="docker")
	I0512 23:27:49.397620  826131 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 23:27:49.397684  826131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 23:27:49.397737  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:49.435763  826131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/custom-weave-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:49.529318  826131 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 23:27:49.532147  826131 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 23:27:49.532185  826131 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 23:27:49.532199  826131 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 23:27:49.532208  826131 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 23:27:49.532221  826131 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/addons for local assets ...
	I0512 23:27:49.532288  826131 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files for local assets ...
	I0512 23:27:49.532380  826131 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem -> 5160442.pem in /etc/ssl/certs
	I0512 23:27:49.532488  826131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 23:27:49.539358  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /etc/ssl/certs/5160442.pem (1708 bytes)
	I0512 23:27:49.593952  826131 start.go:309] post-start completed in 196.318506ms
	I0512 23:27:49.594440  826131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512231715-516044
	I0512 23:27:49.631300  826131 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/config.json ...
	I0512 23:27:49.631568  826131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 23:27:49.631619  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:49.662180  826131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/custom-weave-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:49.757653  826131 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 23:27:49.761762  826131 start.go:134] duration metric: createHost completed in 9.816812393s
	I0512 23:27:49.761788  826131 start.go:81] releasing machines lock for "custom-weave-20220512231715-516044", held for 9.816991328s
	I0512 23:27:49.761867  826131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512231715-516044
	I0512 23:27:49.802680  826131 ssh_runner.go:195] Run: systemctl --version
	I0512 23:27:49.802744  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:49.802760  826131 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 23:27:49.802840  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:27:49.839847  826131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/custom-weave-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:49.840342  826131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/custom-weave-20220512231715-516044/id_rsa Username:docker}
	I0512 23:27:49.953872  826131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 23:27:49.963727  826131 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:27:49.974415  826131 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 23:27:49.974499  826131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 23:27:49.988120  826131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 23:27:50.005568  826131 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 23:27:50.085235  826131 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 23:27:50.170827  826131 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:27:50.183023  826131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 23:27:50.261869  826131 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 23:27:50.281417  826131 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:27:50.329904  826131 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:27:50.372989  826131 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 23:27:50.373132  826131 cli_runner.go:164] Run: docker network inspect custom-weave-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 23:27:50.417150  826131 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0512 23:27:50.421457  826131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
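	The hosts-file rewrite above follows a guarded pattern: grep -v drops any stale host.minikube.internal entry, echo appends the fresh record, the combined output lands in a temp file, and sudo cp installs it. Writing via cp matters because sudo does not apply to a shell redirection ("sudo cmd > /etc/hosts" would open the file as the unprivileged user). The same pattern with hypothetical values:
	
	{ grep -v $'\tmyhost.internal$' /etc/hosts; echo $'10.0.0.1\tmyhost.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
	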
	I0512 23:27:50.433867  826131 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:27:50.433936  826131 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:27:50.465735  826131 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:27:50.465778  826131 docker.go:541] Images already preloaded, skipping extraction
	I0512 23:27:50.465832  826131 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:27:50.503729  826131 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:27:50.503773  826131 cache_images.go:84] Images are preloaded, skipping loading
	I0512 23:27:50.503837  826131 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
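	The CgroupDriver probe above feeds the "cgroupDriver: cgroupfs" field in the KubeletConfiguration rendered below; kubelet and the container runtime must agree on that driver. Run standalone it would look like this (expected output inferred from the config that follows):
	
	$ docker info --format '{{.CgroupDriver}}'
	cgroupfs
	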
	I0512 23:27:50.595102  826131 cni.go:95] Creating CNI manager for "testdata/weavenet.yaml"
	I0512 23:27:50.595473  826131 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 23:27:50.595520  826131 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220512231715-516044 NodeName:custom-weave-20220512231715-516044 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 23:27:50.595783  826131 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20220512231715-516044"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0512 23:27:50.595900  826131 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220512231715-516044 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0512 23:27:50.595967  826131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 23:27:50.605334  826131 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 23:27:50.605410  826131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 23:27:50.614074  826131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0512 23:27:50.628735  826131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 23:27:50.642567  826131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2056 bytes)
	I0512 23:27:50.655343  826131 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0512 23:27:50.658087  826131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 23:27:50.667059  826131 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044 for IP: 192.168.49.2
	I0512 23:27:50.667168  826131 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key
	I0512 23:27:50.667216  826131 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key
	I0512 23:27:50.667275  826131 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/client.key
	I0512 23:27:50.667295  826131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/client.crt with IP's: []
	I0512 23:27:50.937992  826131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/client.crt ...
	I0512 23:27:50.938025  826131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/client.crt: {Name:mkcfeb3626e326bd672c4a3341fdc88dde5b6d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:50.938219  826131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/client.key ...
	I0512 23:27:50.938231  826131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/client.key: {Name:mke156790a688435158d8d2a547d82d3f8182ab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:50.938322  826131 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.key.dd3b5fb2
	I0512 23:27:50.938337  826131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 23:27:51.084959  826131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.crt.dd3b5fb2 ...
	I0512 23:27:51.084995  826131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.crt.dd3b5fb2: {Name:mkbf061ffb19dfb8474f7bdc79f67a1e6b086c27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:51.085218  826131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.key.dd3b5fb2 ...
	I0512 23:27:51.085236  826131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.key.dd3b5fb2: {Name:mkcdfaf4b27a69beac0be61a719080d633bd9acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:51.085353  826131 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.crt
	I0512 23:27:51.085446  826131 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.key
	I0512 23:27:51.085510  826131 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/proxy-client.key
	I0512 23:27:51.085534  826131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/proxy-client.crt with IP's: []
	I0512 23:27:51.234774  826131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/proxy-client.crt ...
	I0512 23:27:51.234805  826131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/proxy-client.crt: {Name:mk687a72a5d953a59273941cfa0652c0ece68eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:51.234984  826131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/proxy-client.key ...
	I0512 23:27:51.234998  826131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/proxy-client.key: {Name:mk4c86f96abb9061bcabc2758b4bcd62cc4b8289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:27:51.235159  826131 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem (1338 bytes)
	W0512 23:27:51.235197  826131 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044_empty.pem, impossibly tiny 0 bytes
	I0512 23:27:51.235209  826131 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem (1679 bytes)
	I0512 23:27:51.235232  826131 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem (1078 bytes)
	I0512 23:27:51.235254  826131 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem (1123 bytes)
	I0512 23:27:51.235276  826131 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem (1675 bytes)
	I0512 23:27:51.235313  826131 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem (1708 bytes)
	I0512 23:27:51.235874  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 23:27:51.254309  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 23:27:51.271685  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 23:27:51.296483  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/custom-weave-20220512231715-516044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0512 23:27:51.316916  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 23:27:51.335661  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0512 23:27:51.352335  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 23:27:51.369118  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0512 23:27:51.389392  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /usr/share/ca-certificates/5160442.pem (1708 bytes)
	I0512 23:27:51.408119  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 23:27:51.427117  826131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem --> /usr/share/ca-certificates/516044.pem (1338 bytes)
	I0512 23:27:51.446018  826131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 23:27:51.459623  826131 ssh_runner.go:195] Run: openssl version
	I0512 23:27:51.464524  826131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5160442.pem && ln -fs /usr/share/ca-certificates/5160442.pem /etc/ssl/certs/5160442.pem"
	I0512 23:27:51.472597  826131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5160442.pem
	I0512 23:27:51.475735  826131 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 12 22:55 /usr/share/ca-certificates/5160442.pem
	I0512 23:27:51.475784  826131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5160442.pem
	I0512 23:27:51.481049  826131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5160442.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 23:27:51.488661  826131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 23:27:51.496570  826131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:27:51.499704  826131 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 12 22:51 /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:27:51.499742  826131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:27:51.504632  826131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 23:27:51.512829  826131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516044.pem && ln -fs /usr/share/ca-certificates/516044.pem /etc/ssl/certs/516044.pem"
	I0512 23:27:51.520465  826131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516044.pem
	I0512 23:27:51.523573  826131 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 12 22:55 /usr/share/ca-certificates/516044.pem
	I0512 23:27:51.523622  826131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516044.pem
	I0512 23:27:51.528379  826131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516044.pem /etc/ssl/certs/51391683.0"
	I0512 23:27:51.535709  826131 kubeadm.go:391] StartCluster: {Name:custom-weave-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:27:51.535824  826131 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 23:27:51.569015  826131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 23:27:51.576862  826131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 23:27:51.584414  826131 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 23:27:51.584472  826131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 23:27:51.591614  826131 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 23:27:51.591657  826131 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 23:27:52.173047  826131 out.go:204]   - Generating certificates and keys ...
	I0512 23:27:54.610337  826131 out.go:204]   - Booting up control plane ...
	I0512 23:28:02.662166  826131 out.go:204]   - Configuring RBAC rules ...
	I0512 23:28:03.086203  826131 cni.go:95] Creating CNI manager for "testdata/weavenet.yaml"
	I0512 23:28:03.102046  826131 out.go:177] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0512 23:28:03.111783  826131 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0512 23:28:03.111850  826131 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0512 23:28:03.116903  826131 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0512 23:28:03.116944  826131 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0512 23:28:03.167824  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0512 23:28:04.531775  826131 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.363901445s)
	I0512 23:28:04.531859  826131 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 23:28:04.532009  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:04.532093  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=5812f8ec06db4997111dc3269784a7f664662f05 minikube.k8s.io/name=custom-weave-20220512231715-516044 minikube.k8s.io/updated_at=2022_05_12T23_28_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:04.642385  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:04.642662  826131 ops.go:34] apiserver oom_adj: -16
	I0512 23:28:05.225255  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:05.724772  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:06.225253  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:06.725033  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:07.225276  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:07.725521  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:08.225599  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:08.725278  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:09.225661  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:09.725186  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:10.224891  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:10.725365  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:11.225029  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:11.724775  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:12.224653  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:12.724670  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:13.225676  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:13.724764  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:14.224698  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:14.725533  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:15.225247  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:15.725656  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:16.224719  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:16.724668  826131 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:16.790508  826131 kubeadm.go:1020] duration metric: took 12.258536615s to wait for elevateKubeSystemPrivileges.
	I0512 23:28:16.790550  826131 kubeadm.go:393] StartCluster complete in 25.254866449s
	I0512 23:28:16.790576  826131 settings.go:142] acquiring lock: {Name:mkfe717360cf8b2fa45465ab4bd68ece68561c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:16.790706  826131 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:28:16.792799  826131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig: {Name:mk0f3828db53b6683822ca2fe8148b87d561cdb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:17.316706  826131 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220512231715-516044" rescaled to 1
	I0512 23:28:17.316771  826131 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:28:17.320841  826131 out.go:177] * Verifying Kubernetes components...
	I0512 23:28:17.316850  826131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 23:28:17.317127  826131 config.go:178] Loaded profile config "custom-weave-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:17.317149  826131 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 23:28:17.325228  826131 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220512231715-516044"
	I0512 23:28:17.325245  826131 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220512231715-516044"
	W0512 23:28:17.325250  826131 addons.go:165] addon storage-provisioner should already be in state true
	I0512 23:28:17.325307  826131 host.go:66] Checking if "custom-weave-20220512231715-516044" exists ...
	I0512 23:28:17.325957  826131 cli_runner.go:164] Run: docker container inspect custom-weave-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:17.326161  826131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:28:17.326281  826131 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220512231715-516044"
	I0512 23:28:17.326304  826131 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220512231715-516044"
	I0512 23:28:17.326607  826131 cli_runner.go:164] Run: docker container inspect custom-weave-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:17.389506  826131 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 23:28:17.395269  826131 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:17.395299  826131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 23:28:17.395364  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:28:17.412066  826131 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220512231715-516044"
	W0512 23:28:17.412093  826131 addons.go:165] addon default-storageclass should already be in state true
	I0512 23:28:17.412126  826131 host.go:66] Checking if "custom-weave-20220512231715-516044" exists ...
	I0512 23:28:17.412652  826131 cli_runner.go:164] Run: docker container inspect custom-weave-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:17.457761  826131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/custom-weave-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:17.483771  826131 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:17.483806  826131 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 23:28:17.483859  826131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512231715-516044
	I0512 23:28:17.495287  826131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
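	The pipeline above edits CoreDNS in place: it dumps the coredns ConfigMap, uses sed to insert a hosts block immediately before the "forward . /etc/resolv.conf" line of the Corefile, and feeds the result back through kubectl replace. Decoding the escaped sed expression, the fragment injected into the Corefile is:
	
	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
	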
	I0512 23:28:17.496520  826131 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220512231715-516044" to be "Ready" ...
	I0512 23:28:17.500272  826131 node_ready.go:49] node "custom-weave-20220512231715-516044" has status "Ready":"True"
	I0512 23:28:17.500291  826131 node_ready.go:38] duration metric: took 3.68698ms waiting for node "custom-weave-20220512231715-516044" to be "Ready" ...
	I0512 23:28:17.500307  826131 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:28:17.511720  826131 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-rqv6q" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:17.534282  826131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/custom-weave-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:17.591687  826131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:17.698994  826131 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:17.910626  826131 start.go:815] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0512 23:28:18.116655  826131 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0512 23:28:18.118255  826131 addons.go:417] enableAddons completed in 801.104581ms
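	Everything from here to the end of the attempt is a readiness poll: pod_ready.go re-checks the coredns-64897985d-rqv6q pod roughly every 2.5 seconds and logs the unchanged "Ready":"False" status on every miss, so the same line repeats with advancing timestamps for the remainder of the 5m0s wait. Roughly equivalent shell (a sketch, not minikube's actual Go code):
	
	until [ "$(kubectl -n kube-system get pod coredns-64897985d-rqv6q \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
	  sleep 2.5
	done
	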
	I0512 23:28:19.527104  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:22.027467  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:24.525738  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:26.529202  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:29.027012  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:31.526546  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:34.026340  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:36.027893  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:38.526607  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:41.026112  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:43.529445  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:46.024526  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:48.025394  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:50.026193  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:52.026387  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:54.026539  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:56.027207  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:58.526504  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:01.025356  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:03.026550  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:05.525281  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:07.525885  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:09.525957  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:11.526121  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:13.526224  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	... (pod_ready.go:102 entries elided: pod "coredns-64897985d-rqv6q" was re-checked every 2-2.5s from 23:29:15 through 23:32:15 and reported "Ready":"False" on every check) ...
	I0512 23:32:17.528719  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:17.528741  826131 pod_ready.go:81] duration metric: took 4m0.016989164s waiting for pod "coredns-64897985d-rqv6q" in "kube-system" namespace to be "Ready" ...
	E0512 23:32:17.528749  826131 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:32:17.528757  826131 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.532726  826131 pod_ready.go:92] pod "etcd-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.532794  826131 pod_ready.go:81] duration metric: took 4.02892ms waiting for pod "etcd-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.532823  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.537261  826131 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.537283  826131 pod_ready.go:81] duration metric: took 4.440767ms waiting for pod "kube-apiserver-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.537295  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.541278  826131 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.541296  826131 pod_ready.go:81] duration metric: took 3.994407ms waiting for pod "kube-controller-manager-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.541305  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-2qmfq" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.922912  826131 pod_ready.go:92] pod "kube-proxy-2qmfq" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.922939  826131 pod_ready.go:81] duration metric: took 381.627854ms waiting for pod "kube-proxy-2qmfq" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.922952  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:18.322734  826131 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:18.322762  826131 pod_ready.go:81] duration metric: took 399.801441ms waiting for pod "kube-scheduler-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:18.322776  826131 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-64z47" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:20.728907  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	... (pod_ready.go:102 entries elided: pod "weave-net-64z47" was re-checked every 2-2.5s from 23:32:22 through 23:36:16 and reported "Ready":"False" on every check) ...
	I0512 23:36:18.229858  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:36:18.733301  826131 pod_ready.go:81] duration metric: took 4m0.410510494s waiting for pod "weave-net-64z47" in "kube-system" namespace to be "Ready" ...
	E0512 23:36:18.733331  826131 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:36:18.733338  826131 pod_ready.go:38] duration metric: took 8m1.233015594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:36:18.733370  826131 api_server.go:51] waiting for apiserver process to appear ...
	I0512 23:36:18.736272  826131 out.go:177] 
	W0512 23:36:18.737908  826131 out.go:239] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0512 23:36:18.737995  826131 out.go:239] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0512 23:36:18.738012  826131 out.go:239] * Related issues:
	* Related issues:
	W0512 23:36:18.738062  826131 out.go:239]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0512 23:36:18.738116  826131 out.go:239]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0512 23:36:18.739822  826131 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (519.27s)
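
Triage note: the start failed because weave-net-64z47 never reported Ready and the kube-apiserver process subsequently disappeared (K8S_APISERVER_MISSING). A minimal triage sketch follows, assuming the kubectl context and node container share the profile name shown in the log above; these commands are illustrative, not part of the test suite:

	# Why did the weave-net pod never become Ready?
	kubectl --context custom-weave-20220512231715-516044 -n kube-system describe pod weave-net-64z47
	# Is kube-apiserver still running inside the node container?
	docker exec custom-weave-20220512231715-516044 pgrep -a kube-apiserver
	# Per the suggestion above, confirm SELinux is not enforcing on the host (if installed)
	getenforce 2>/dev/null || echo "SELinux tooling not installed"
	# Capture full logs for the profile
	out/minikube-linux-amd64 logs -p custom-weave-20220512231715-516044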

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (360.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:29:11.886883  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:29:12.759245  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:29:13.418333  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:23.659098  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150463938s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:29:33.240443  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143688841s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:29:44.139902  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:44.557198  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:44.562509  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:44.572766  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:44.593162  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:44.633463  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:44.713814  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:44.874292  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:45.194935  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:45.835613  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:47.116185  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:49.676629  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:29:54.797555  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148419668s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:30:05.038060  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:30:14.201500  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.152091774s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:30:25.100423  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:30:25.518569  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143027873s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157337552s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:31:06.479420  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159707807s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:31:16.391322  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:16.396626  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:16.406863  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:16.427128  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:16.467367  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:16.547666  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:16.708008  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:17.029201  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:17.669973  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:18.950464  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:21.511376  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:31:26.319721  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 23:31:26.632333  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:31:36.122331  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:31:36.873161  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.163648761s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:31:47.021193  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:31:57.353387  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:32:07.582291  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150438028s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:32:20.347718  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:20.353001  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:20.363253  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:20.383524  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:20.423794  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:20.504068  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:20.664495  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:20.985944  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:21.626182  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:22.906651  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:25.467797  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:28.400511  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:32:30.588450  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:32:38.314068  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.172065931s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:33:08.336848  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/cilium-20220512231715-516044/client.crt: no such file or directory
E0512 23:33:18.577645  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/cilium-20220512231715-516044/client.crt: no such file or directory
E0512 23:33:21.748434  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:33:23.272973  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 23:33:39.058837  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/cilium-20220512231715-516044/client.crt: no such file or directory
E0512 23:33:42.271389  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:33:52.278335  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:34:00.235176  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147771331s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:34:03.178353  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:34:11.886713  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:34:19.962903  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:34:20.020065  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/cilium-20220512231715-516044/client.crt: no such file or directory
E0512 23:34:30.861492  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:34:44.557447  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:34:44.792827  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:35:04.191821  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:35:10.937800  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.174798764s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (360.85s)
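
Triage note: every nslookup attempt timed out, meaning the netcat pod could not reach any DNS server at all; the assertion expects the answer to contain the apiserver ClusterIP (10.96.0.1 under the default service CIDR). A rough sketch for narrowing down whether CoreDNS or pod networking is at fault, assuming the default kube-dns Service IP of 10.96.0.10; commands are illustrative, not part of the suite:

	# Is CoreDNS up, and does the kube-dns Service have endpoints?
	kubectl --context enable-default-cni-20220512231715-516044 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context enable-default-cni-20220512231715-516044 -n kube-system get endpoints kube-dns
	# Query CoreDNS directly from the netcat pod, bypassing the pod's resolv.conf
	kubectl --context enable-default-cni-20220512231715-516044 exec deployment/netcat -- \
	  nslookup kubernetes.default.svc.cluster.local 10.96.0.10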

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220512231813-516044 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20220512231813-516044 --alsologtostderr -v=1: exit status 80 (1.725268381s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20220512231813-516044 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 23:32:50.419811  966347 out.go:296] Setting OutFile to fd 1 ...
	I0512 23:32:50.419917  966347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:32:50.419927  966347 out.go:309] Setting ErrFile to fd 2...
	I0512 23:32:50.419933  966347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:32:50.420034  966347 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 23:32:50.420181  966347 out.go:303] Setting JSON to false
	I0512 23:32:50.420203  966347 mustload.go:65] Loading cluster: embed-certs-20220512231813-516044
	I0512 23:32:50.420525  966347 config.go:178] Loaded profile config "embed-certs-20220512231813-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:32:50.420914  966347 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:32:50.455353  966347 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:32:50.455647  966347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:32:50.572498  966347 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:54 SystemTime:2022-05-12 23:32:50.485680644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:32:50.573154  966347 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0512 23:32:50.575974  966347 out.go:177] * Pausing node embed-certs-20220512231813-516044 ... 
	I0512 23:32:50.577283  966347 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:32:50.577616  966347 ssh_runner.go:195] Run: systemctl --version
	I0512 23:32:50.577674  966347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512231813-516044
	I0512 23:32:50.624329  966347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/embed-certs-20220512231813-516044/id_rsa Username:docker}
	I0512 23:32:50.718304  966347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:32:50.729205  966347 pause.go:50] kubelet running: true
	I0512 23:32:50.729270  966347 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 23:32:50.885527  966347 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0512 23:32:51.161968  966347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:32:51.172233  966347 pause.go:50] kubelet running: true
	I0512 23:32:51.172298  966347 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 23:32:51.358670  966347 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0512 23:32:51.899018  966347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:32:51.932808  966347 pause.go:50] kubelet running: true
	I0512 23:32:51.932876  966347 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 23:32:52.067661  966347 out.go:177] 
	W0512 23:32:52.069261  966347 out.go:239] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0512 23:32:52.069292  966347 out.go:239] * 
	W0512 23:32:52.072521  966347 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 23:32:52.074393  966347 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-linux-amd64 pause -p embed-certs-20220512231813-516044 --alsologtostderr -v=1 failed: exit status 80
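
The pause aborted while stopping the kubelet: as the stderr log shows, minikube retries "sudo systemctl disable --now kubelet" and every attempt dies in the SysV compatibility step (update-rc.d: error: kubelet Default-Start contains no runlevels). A sketch for replaying the failing step by hand, assuming the embed-certs profile is still running (the ssh invocation mirrors the other commands in this report):

	out/minikube-linux-amd64 ssh -p embed-certs-20220512231813-516044 "sudo systemctl disable --now kubelet"
	out/minikube-linux-amd64 ssh -p embed-certs-20220512231813-516044 "sudo systemctl stop kubelet"

systemctl stop does not trigger the update-rc.d synchronization that disable does, so if the second command succeeds where the first fails, the problem is confined to the unit file's missing Default-Start runlevels rather than to the kubelet itself.
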
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220512231813-516044
helpers_test.go:235: (dbg) docker inspect embed-certs-20220512231813-516044:

-- stdout --
	[
	    {
	        "Id": "52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c",
	        "Created": "2022-05-12T23:21:49.918323005Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 771187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T23:22:47.672865442Z",
	            "FinishedAt": "2022-05-12T23:22:46.407249297Z"
	        },
	        "Image": "sha256:0c5d9f8f84652aecf60b51012e4dbd6b63610a21a4eff9bcda47c370186206c5",
	        "ResolvConfPath": "/var/lib/docker/containers/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c/hostname",
	        "HostsPath": "/var/lib/docker/containers/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c/hosts",
	        "LogPath": "/var/lib/docker/containers/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c-json.log",
	        "Name": "/embed-certs-20220512231813-516044",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20220512231813-516044:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220512231813-516044",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65c2150b68b8e4801f989bf0b53a6d32d617c837b719163df8098f06ddf021ca-init/diff:/var/lib/docker/overlay2/ee42149b25e28b76859c4061b8e1a834d47aa37da264f16af56a871bc4d249db/diff:/var/lib/docker/overlay2/3a08ce2dbc417a00b46e55b35b8386c502b9863cda04d95f2f893823ecd7a494/diff:/var/lib/docker/overlay2/cda9560399987a3ee5f4cd2af8edc9da25932bb5258944308a15874c67cbc319/diff:/var/lib/docker/overlay2/dd36997b49a6def06e9dcfdba5f7ef14311dd1de32a9a00344c6fbd50b553096/diff:/var/lib/docker/overlay2/43d0ec81b54d9b54cded9447ec902ac225ef32b76bbf8fccb297c43987228a75/diff:/var/lib/docker/overlay2/9f402168f981cd0073442c86be65fb2d132e3a78ae03bb909ac095619edb2eb2/diff:/var/lib/docker/overlay2/28bdb0476cf6f9cd9f2a0dd3331dfd3b37522bd60b1a27bb2610dca8d8b635ea/diff:/var/lib/docker/overlay2/2a0efc3b0c7eb642b0bc0c536b3a4d17e9ac518f7aebec02e1ec05b3d428fb1f/diff:/var/lib/docker/overlay2/e0c81de4167668d18ebd9e7c18a58cc9f299fd35fb62a015b25d5a02ae58d4b5/diff:/var/lib/docker/overlay2/2a4672
624588450729b53e00431ae5907364bce3567f80f2ea728fb857326904/diff:/var/lib/docker/overlay2/0e97bfc89e61d3d62858040c785f3b284f33ac7018f4b4d33a3036c098c97e3e/diff:/var/lib/docker/overlay2/8a73a22b019c3a55efb1a43c8f75fc58d5ca41ce0e49a611f547d879b1ffda7b/diff:/var/lib/docker/overlay2/848fea1622c1b0d14632242da931bc2db1161dd5b385949342c2a2c11f51cf73/diff:/var/lib/docker/overlay2/662426b8cb54c68fc690e53b79ffdaf74b3933d049ac45ac519fe0ab9768c00f/diff:/var/lib/docker/overlay2/f6dff72be55abd7c1636a8499b17e3e9c2505335e260f6441887d32e06af996c/diff:/var/lib/docker/overlay2/1457b483d3d2b3d49d94df784f17c826976abf770d40da25d61dc4a56352f801/diff:/var/lib/docker/overlay2/80ca98bba440d041f7780aece93b713f26c9681123a38f3c217bdf2994333169/diff:/var/lib/docker/overlay2/a84cd323e14e9fe88691d66a20cc13256253fd5e9438e1a5022e264217fbc7fc/diff:/var/lib/docker/overlay2/d5d7afe5ecbe4e28e78af49b1a44fcfa61023292e194719f37a0b4ed8ca82d4d/diff:/var/lib/docker/overlay2/d1c6af58176488a61b42dbade1d4c12c7320e6076dbfb9fc854fc26d0f436679/diff:/var/lib/d
ocker/overlay2/8169f5daa2d7dd4fdcbbedcd091248fb688a46d06339f1aa324c98e3df6b5d26/diff:/var/lib/docker/overlay2/0c367bf0a6d0e5d2f91a251190951488497a3b393f33ab37c9f0babfe8c3d27c/diff:/var/lib/docker/overlay2/168a4f8c2f13b8906726264edcebcb3cbe39ed638fe32e9a7e86d757de805dfc/diff:/var/lib/docker/overlay2/02b5ef49e3dece0178b813849e23e13ac56cb2c7b86db153d97fb48938a08a9b/diff:/var/lib/docker/overlay2/c3f3206ec18f364a03b895874e2e4b5e5d41b88af889d7ab1075d05d3c1174d3/diff:/var/lib/docker/overlay2/a7d920f53ed56631d472da0b34690dc70ce9c229f4feb17079d824ed2ee615c1/diff:/var/lib/docker/overlay2/9c483ae36d1f9158f5d2295d700e238d3bf16a8e46b9ea56f075485f82c5e434/diff:/var/lib/docker/overlay2/fd0dffd16fb9881ef59350db54d0cb879e79865f92e3964915828a627495351c/diff:/var/lib/docker/overlay2/cbb9eb97bc9666f97a39e769ab1e2bc70b73aeae79d2ec92822441566e6a587a/diff:/var/lib/docker/overlay2/b2639cfc76a8b294bbc4e8ca1befbee89fb96f093a1a3612207575f286a83080/diff:/var/lib/docker/overlay2/7bcf83888007057f9e2957456442eb6bde9c8112b06bd957a538463482b
7efd9/diff:/var/lib/docker/overlay2/f983b625edec8c1a25d7703ed497666a8f3dafe6ff1ffcbd55c9dd22c6c4d21d/diff:/var/lib/docker/overlay2/6e81a73b1d45906ebc7f357663b451e1ad8e61dd2a40f7da53742dec9ea8cc56/diff:/var/lib/docker/overlay2/19b513eec8f0deed93713262477ab308f8919e10b6ea5b3a4dcc41bf1cff0825/diff:/var/lib/docker/overlay2/b9af518889b8c70b0e652ee87e07c15b2e4865af121883ed942f1170763560c4/diff:/var/lib/docker/overlay2/90a4f31f04635f43475897f90e5692b3ae5ee023a53e99fdbbf382d545dac17d/diff:/var/lib/docker/overlay2/834445e7db36584c983dc950c96c9da9e0404ca274925ad142d9c7ae3ce7661d/diff:/var/lib/docker/overlay2/19337e43fcad0841394f1284cbb0d8a67e541c2bfe557a1956357cdd76508daf/diff:/var/lib/docker/overlay2/2e54094fc1a751bb1ef3c5b293d1f9e345afa75cab14bf08ae7aa007409381c8/diff:/var/lib/docker/overlay2/709d91d3b444b7fe7ab0a34a6869392318c613c4f294ddfe0c7480222c7cb35a/diff:/var/lib/docker/overlay2/2aa3de43882a67af6abdf2af131a29c63efe1b2b4f07ec65150d80ad6a6d6574/diff:/var/lib/docker/overlay2/e6cee571b331f309878811c7521d4feb397411
90ac269e42c32e6afe955e94a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65c2150b68b8e4801f989bf0b53a6d32d617c837b719163df8098f06ddf021ca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65c2150b68b8e4801f989bf0b53a6d32d617c837b719163df8098f06ddf021ca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65c2150b68b8e4801f989bf0b53a6d32d617c837b719163df8098f06ddf021ca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220512231813-516044",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220512231813-516044/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220512231813-516044",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220512231813-516044",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220512231813-516044",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f26f67b6761c19fdb5f2988b1b80b00320455fb41d6ef17e1fa6248181215ce4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f26f67b6761c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220512231813-516044": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "52dfd5b2f2ab",
	                        "embed-certs-20220512231813-516044"
	                    ],
	                    "NetworkID": "37f634322f538caf1039168a71743e1e12e3e151d83e9005d38357338f530821",
	                    "EndpointID": "991ab814fb4310fc0760f375e35794da03dab0fa496839c8e392c2c121b66cad",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
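
The inspect output above confirms the node container was still running with SSH published on host port 49407, so the pause failure did not take the container down. Both facts can be spot-checked without the full JSON dump; these are the same Go templates minikube runs in the stderr log above, adapted for an interactive shell:

	docker inspect -f '{{.State.Status}}' embed-certs-20220512231813-516044
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-20220512231813-516044
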
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220512231813-516044 -n embed-certs-20220512231813-516044
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220512231813-516044 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20220512231813-516044 logs -n 25: (1.316831864s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                 Profile                  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p auto-20220512231715-516044                              | auto-20220512231715-516044               | jenkins | v1.25.2 | 12 May 22 23:25 UTC | 12 May 22 23:26 UTC |
	|         | --memory=2048                                              |                                          |         |         |                     |                     |
	|         | --alsologtostderr                                          |                                          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                          |         |         |                     |                     |
	|         | --driver=docker                                            |                                          |         |         |                     |                     |
	|         | --container-runtime=docker                                 |                                          |         |         |                     |                     |
	| ssh     | -p auto-20220512231715-516044                              | auto-20220512231715-516044               | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | pgrep -a kubelet                                           |                                          |         |         |                     |                     |
	| start   | -p newest-cni-20220512232515-516044 --memory=2200          | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                          |         |         |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker                |                                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                          |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                          |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                          |         |         |                     |                     |
	| delete  | -p auto-20220512231715-516044                              | auto-20220512231715-516044               | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	| delete  | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:20 UTC | 12 May 22 23:26 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |                                          |         |         |                     |                     |
	|         |  --container-runtime=docker                                |                                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                          |         |         |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                          |         |         |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                          |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	| start   | -p false-20220512231715-516044                             | false-20220512231715-516044              | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:27 UTC |
	|         | --memory=2048                                              |                                          |         |         |                     |                     |
	|         | --alsologtostderr                                          |                                          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                          |         |         |                     |                     |
	|         | --cni=false --driver=docker                                |                                          |         |         |                     |                     |
	|         | --container-runtime=docker                                 |                                          |         |         |                     |                     |
	| ssh     | -p false-20220512231715-516044                             | false-20220512231715-516044              | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	|         | pgrep -a kubelet                                           |                                          |         |         |                     |                     |
	| delete  | -p false-20220512231715-516044                             | false-20220512231715-516044              | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	| start   | -p                                                         | cilium-20220512231715-516044             | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:27 UTC |
	|         | cilium-20220512231715-516044                               |                                          |         |         |                     |                     |
	|         | --memory=2048                                              |                                          |         |         |                     |                     |
	|         | --alsologtostderr                                          |                                          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                          |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                               |                                          |         |         |                     |                     |
	|         | --container-runtime=docker                                 |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | cilium-20220512231715-516044             | jenkins | v1.25.2 | 12 May 22 23:28 UTC | 12 May 22 23:28 UTC |
	|         | cilium-20220512231715-516044                               |                                          |         |         |                     |                     |
	|         | pgrep -a kubelet                                           |                                          |         |         |                     |                     |
	| delete  | -p                                                         | cilium-20220512231715-516044             | jenkins | v1.25.2 | 12 May 22 23:28 UTC | 12 May 22 23:28 UTC |
	|         | cilium-20220512231715-516044                               |                                          |         |         |                     |                     |
	| start   | -p                                                         | enable-default-cni-20220512231715-516044 | jenkins | v1.25.2 | 12 May 22 23:28 UTC | 12 May 22 23:29 UTC |
	|         | enable-default-cni-20220512231715-516044                   |                                          |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                            |                                          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                          |         |         |                     |                     |
	|         | --enable-default-cni=true                                  |                                          |         |         |                     |                     |
	|         | --driver=docker                                            |                                          |         |         |                     |                     |
	|         | --container-runtime=docker                                 |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | enable-default-cni-20220512231715-516044 | jenkins | v1.25.2 | 12 May 22 23:29 UTC | 12 May 22 23:29 UTC |
	|         | enable-default-cni-20220512231715-516044                   |                                          |         |         |                     |                     |
	|         | pgrep -a kubelet                                           |                                          |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220512231813-516044        | jenkins | v1.25.2 | 12 May 22 23:22 UTC | 12 May 22 23:32 UTC |
	|         | embed-certs-20220512231813-516044                          |                                          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                          |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                          |         |         |                     |                     |
	|         | --driver=docker                                            |                                          |         |         |                     |                     |
	|         | --container-runtime=docker                                 |                                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                               |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220512231813-516044        | jenkins | v1.25.2 | 12 May 22 23:32 UTC | 12 May 22 23:32 UTC |
	|         | embed-certs-20220512231813-516044                          |                                          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                          |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 23:28:18
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 23:28:18.580360  843786 out.go:296] Setting OutFile to fd 1 ...
	I0512 23:28:18.580526  843786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:28:18.580545  843786 out.go:309] Setting ErrFile to fd 2...
	I0512 23:28:18.580552  843786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:28:18.580723  843786 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 23:28:18.581663  843786 out.go:303] Setting JSON to false
	I0512 23:28:18.584271  843786 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":22255,"bootTime":1652375844,"procs":1224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 23:28:18.584355  843786 start.go:125] virtualization: kvm guest
	I0512 23:28:18.586886  843786 out.go:177] * [enable-default-cni-20220512231715-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0512 23:28:18.588703  843786 out.go:177]   - MINIKUBE_LOCATION=12739
	I0512 23:28:18.588680  843786 notify.go:193] Checking for updates...
	I0512 23:28:18.591585  843786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 23:28:18.592981  843786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:28:18.594262  843786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 23:28:18.595557  843786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0512 23:28:18.597324  843786 config.go:178] Loaded profile config "calico-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:18.597455  843786 config.go:178] Loaded profile config "custom-weave-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:18.597573  843786 config.go:178] Loaded profile config "embed-certs-20220512231813-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:18.597648  843786 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 23:28:18.649839  843786 docker.go:137] docker version: linux-20.10.16
	I0512 23:28:18.649941  843786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:28:18.814320  843786 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:28:18.702239405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:28:18.814428  843786 docker.go:254] overlay module found
	I0512 23:28:18.816727  843786 out.go:177] * Using the docker driver based on user configuration
	I0512 23:28:18.818128  843786 start.go:284] selected driver: docker
	I0512 23:28:18.818157  843786 start.go:806] validating driver "docker" against <nil>
	I0512 23:28:18.818183  843786 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 23:28:18.819403  843786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:28:18.967412  843786 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:28:18.859994416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:28:18.967562  843786 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	E0512 23:28:18.967741  843786 start_flags.go:444] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0512 23:28:18.967763  843786 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 23:28:18.969521  843786 out.go:177] * Using Docker driver with the root privilege
	I0512 23:28:18.970712  843786 cni.go:95] Creating CNI manager for "bridge"
	I0512 23:28:18.970731  843786 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0512 23:28:18.970742  843786 start_flags.go:306] config:
	{Name:enable-default-cni-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:enable-default-cni-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:28:18.972192  843786 out.go:177] * Starting control plane node enable-default-cni-20220512231715-516044 in cluster enable-default-cni-20220512231715-516044
	I0512 23:28:18.973764  843786 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 23:28:18.975281  843786 out.go:177] * Pulling base image ...
	I0512 23:28:18.976708  843786 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:28:18.976761  843786 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 23:28:18.976776  843786 cache.go:57] Caching tarball of preloaded images
	I0512 23:28:18.976797  843786 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0512 23:28:18.977001  843786 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 23:28:18.977025  843786 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 23:28:18.977191  843786 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/config.json ...
	I0512 23:28:18.977229  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/config.json: {Name:mk3aae760a0be104b53421a4cae9bbbe3e51b18d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:19.034575  843786 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
	I0512 23:28:19.034616  843786 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in daemon, skipping load
	I0512 23:28:19.034635  843786 cache.go:206] Successfully downloaded all kic artifacts
	I0512 23:28:19.034684  843786 start.go:352] acquiring machines lock for enable-default-cni-20220512231715-516044: {Name:mk38f3d3df3d3ca8fbfac4fc046f4ebec5ba2ed4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 23:28:19.034818  843786 start.go:356] acquired machines lock for "enable-default-cni-20220512231715-516044" in 107.959µs
	I0512 23:28:19.034854  843786 start.go:91] Provisioning new machine with config: &{Name:enable-default-cni-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:enable-default-cni-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:28:19.034987  843786 start.go:131] createHost starting for "" (driver="docker")
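The machines lock above is acquired almost immediately (107.959µs) because no other start holds it; the logged parameters (Delay:500ms Timeout:10m0s) describe a retry loop around the acquire. A minimal Go sketch of that acquire-with-retry pattern, assuming the logged delay and timeout; tryAcquire is a hypothetical stand-in, not minikube's actual lock API:

    // Sketch of a retry-based lock acquire using the logged
    // parameters Delay:500ms and Timeout:10m0s.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func acquireWithRetry(tryAcquire func() (bool, error), delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		ok, err := tryAcquire()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil // lock held
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for machines lock")
    		}
    		time.Sleep(delay) // logged Delay:500ms between attempts
    	}
    }

    func main() {
    	err := acquireWithRetry(func() (bool, error) {
    		return true, nil // first attempt succeeds, mirroring the ~108µs acquire above
    	}, 500*time.Millisecond, 10*time.Minute)
    	fmt.Println("acquired:", err == nil)
    }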
	I0512 23:28:18.118255  826131 addons.go:417] enableAddons completed in 801.104581ms
	I0512 23:28:19.527104  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:17.469878  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:19.988219  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:17.397660  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:17.898316  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:18.397901  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:18.897673  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:19.397716  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:19.898453  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:20.398381  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:20.898327  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:21.398104  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:21.712059  770898 kubeadm.go:1020] duration metric: took 13.642388064s to wait for elevateKubeSystemPrivileges.
	I0512 23:28:21.712101  770898 kubeadm.go:393] StartCluster complete in 5m28.329455174s
	I0512 23:28:21.712124  770898 settings.go:142] acquiring lock: {Name:mkfe717360cf8b2fa45465ab4bd68ece68561c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:21.712255  770898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:28:21.713784  770898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig: {Name:mk0f3828db53b6683822ca2fe8148b87d561cdb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:22.397601  770898 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220512231813-516044" rescaled to 1
	I0512 23:28:22.397663  770898 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:28:22.400985  770898 out.go:177] * Verifying Kubernetes components...
	I0512 23:28:22.397715  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 23:28:22.397713  770898 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0512 23:28:22.397949  770898 config.go:178] Loaded profile config "embed-certs-20220512231813-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:22.402684  770898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:28:22.402725  770898 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220512231813-516044"
	I0512 23:28:22.402749  770898 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220512231813-516044"
	I0512 23:28:22.402749  770898 addons.go:65] Setting dashboard=true in profile "embed-certs-20220512231813-516044"
	I0512 23:28:22.402757  770898 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220512231813-516044"
	I0512 23:28:22.402771  770898 addons.go:153] Setting addon dashboard=true in "embed-certs-20220512231813-516044"
	I0512 23:28:22.402770  770898 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220512231813-516044"
	W0512 23:28:22.402773  770898 addons.go:165] addon storage-provisioner should already be in state true
	W0512 23:28:22.402779  770898 addons.go:165] addon dashboard should already be in state true
	W0512 23:28:22.402780  770898 addons.go:165] addon metrics-server should already be in state true
	I0512 23:28:22.402822  770898 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:28:22.402822  770898 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:28:22.402822  770898 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:28:22.402735  770898 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220512231813-516044"
	I0512 23:28:22.403182  770898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220512231813-516044"
	I0512 23:28:22.403336  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.403337  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.403423  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.403453  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.472938  770898 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220512231813-516044"
	W0512 23:28:22.472966  770898 addons.go:165] addon default-storageclass should already be in state true
	I0512 23:28:22.472998  770898 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:28:22.473631  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.478472  770898 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0512 23:28:22.485295  770898 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0512 23:28:22.487807  770898 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0512 23:28:22.487831  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0512 23:28:22.487900  770898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512231813-516044
	I0512 23:28:22.486659  770898 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0512 23:28:22.491978  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0512 23:28:22.492004  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0512 23:28:22.492061  770898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512231813-516044
	I0512 23:28:22.569275  770898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
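The jumbled ordering of the "Setting addon" lines above (storage-provisioner, metrics-server, dashboard, and default-storageclass all logged within roughly 100µs, each followed by its own docker container inspect) suggests each addon is enabled on its own goroutine. A toy Go sketch of that fan-out, under that assumption; the addon names are taken from the log, the rest is illustrative:

    package main

    import (
    	"fmt"
    	"sync"
    )

    func main() {
    	addons := []string{"storage-provisioner", "metrics-server", "dashboard", "default-storageclass"}
    	var wg sync.WaitGroup
    	for _, a := range addons {
    		wg.Add(1)
    		go func(name string) { // one goroutine per addon, hence the interleaved log lines
    			defer wg.Done()
    			fmt.Printf("Setting addon %s=true\n", name)
    		}(a)
    	}
    	wg.Wait()
    	fmt.Println("enableAddons completed")
    }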
	I0512 23:28:19.037770  843786 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 23:28:19.038064  843786 start.go:165] libmachine.API.Create for "enable-default-cni-20220512231715-516044" (driver="docker")
	I0512 23:28:19.038114  843786 client.go:168] LocalClient.Create starting
	I0512 23:28:19.038208  843786 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem
	I0512 23:28:19.038252  843786 main.go:134] libmachine: Decoding PEM data...
	I0512 23:28:19.038277  843786 main.go:134] libmachine: Parsing certificate...
	I0512 23:28:19.038348  843786 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem
	I0512 23:28:19.038379  843786 main.go:134] libmachine: Decoding PEM data...
	I0512 23:28:19.038399  843786 main.go:134] libmachine: Parsing certificate...
	I0512 23:28:19.038856  843786 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 23:28:19.071828  843786 cli_runner.go:211] docker network inspect enable-default-cni-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 23:28:19.071906  843786 network_create.go:272] running [docker network inspect enable-default-cni-20220512231715-516044] to gather additional debugging logs...
	I0512 23:28:19.071929  843786 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220512231715-516044
	W0512 23:28:19.108247  843786 cli_runner.go:211] docker network inspect enable-default-cni-20220512231715-516044 returned with exit code 1
	I0512 23:28:19.108294  843786 network_create.go:275] error running [docker network inspect enable-default-cni-20220512231715-516044]: docker network inspect enable-default-cni-20220512231715-516044: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220512231715-516044
	I0512 23:28:19.108313  843786 network_create.go:277] output of [docker network inspect enable-default-cni-20220512231715-516044]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220512231715-516044
	
	** /stderr **
	I0512 23:28:19.108370  843786 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 23:28:19.149543  843786 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-43829243746f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:13:6e:f4:7c}}
	I0512 23:28:19.150200  843786 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006c4618] misses:0}
	I0512 23:28:19.150235  843786 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 23:28:19.150251  843786 network_create.go:115] attempt to create docker network enable-default-cni-20220512231715-516044 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 23:28:19.150296  843786 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220512231715-516044
	I0512 23:28:19.231144  843786 network_create.go:99] docker network enable-default-cni-20220512231715-516044 192.168.58.0/24 created
	I0512 23:28:19.231187  843786 kic.go:106] calculated static IP "192.168.58.2" for the "enable-default-cni-20220512231715-516044" container
	I0512 23:28:19.231253  843786 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 23:28:19.267949  843786 cli_runner.go:164] Run: docker volume create enable-default-cni-20220512231715-516044 --label name.minikube.sigs.k8s.io=enable-default-cni-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true
	I0512 23:28:19.306316  843786 oci.go:103] Successfully created a docker volume enable-default-cni-20220512231715-516044
	I0512 23:28:19.306410  843786 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-20220512231715-516044-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220512231715-516044 --entrypoint /usr/bin/test -v enable-default-cni-20220512231715-516044:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -d /var/lib
	I0512 23:28:20.114288  843786 oci.go:107] Successfully prepared a docker volume enable-default-cni-20220512231715-516044
	I0512 23:28:20.114358  843786 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:28:20.114384  843786 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 23:28:20.114451  843786 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir
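Subnet selection in the block above works by probing candidate private /24s in order and skipping any that an existing bridge already owns: 192.168.49.0/24 is taken (br-43829243746f), so 192.168.58.0/24 is reserved, with the gateway at .1 and the node at .2. A rough Go sketch of that scan; the step of 9 between candidates matches the subnets seen in this run (49, 58, and 67 for the embed-certs profile) but is an assumption about the general rule, and the helper name is illustrative:

    package main

    import "fmt"

    // firstFreeSubnet returns the first candidate /24 not already taken
    // by an existing docker network. Candidate generation is an assumption
    // inferred from this run's logs, not minikube's exact implementation.
    func firstFreeSubnet(taken map[string]bool) string {
    	for third := 49; third <= 247; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{"192.168.49.0/24": true} // owned by br-43829243746f per the log
    	subnet := firstFreeSubnet(taken)
    	fmt.Println("using", subnet) // 192.168.58.0/24
    	fmt.Println("gateway 192.168.58.1, static node IP 192.168.58.2")
    }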
	I0512 23:28:22.027467  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:24.525738  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:22.470707  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:24.968786  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:22.541272  770898 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:22.555910  770898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/embed-certs-20220512231813-516044/id_rsa Username:docker}
	I0512 23:28:22.563731  770898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/embed-certs-20220512231813-516044/id_rsa Username:docker}
	I0512 23:28:22.580243  770898 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:22.580274  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 23:28:22.580340  770898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512231813-516044
	I0512 23:28:22.580380  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 23:28:22.580442  770898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512231813-516044
	I0512 23:28:22.610308  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 23:28:22.610544  770898 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220512231813-516044" to be "Ready" ...
	I0512 23:28:22.621639  770898 node_ready.go:49] node "embed-certs-20220512231813-516044" has status "Ready":"True"
	I0512 23:28:22.621663  770898 node_ready.go:38] duration metric: took 11.097061ms waiting for node "embed-certs-20220512231813-516044" to be "Ready" ...
	I0512 23:28:22.621676  770898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:28:22.629255  770898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-4bhs4" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:22.651122  770898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/embed-certs-20220512231813-516044/id_rsa Username:docker}
	I0512 23:28:22.667637  770898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/embed-certs-20220512231813-516044/id_rsa Username:docker}
	I0512 23:28:22.724293  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0512 23:28:22.724332  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0512 23:28:22.809892  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0512 23:28:22.809933  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0512 23:28:22.879147  770898 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0512 23:28:22.879178  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0512 23:28:22.886158  770898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:23.017996  770898 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0512 23:28:23.018026  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0512 23:28:23.018821  770898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:23.019237  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0512 23:28:23.019254  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0512 23:28:23.073879  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0512 23:28:23.073967  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0512 23:28:23.088176  770898 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 23:28:23.088209  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0512 23:28:23.175612  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0512 23:28:23.175706  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0512 23:28:23.184329  770898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 23:28:23.279226  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0512 23:28:23.279263  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0512 23:28:23.574749  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0512 23:28:23.574798  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0512 23:28:23.701198  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0512 23:28:23.701229  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0512 23:28:23.738065  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0512 23:28:23.738097  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0512 23:28:23.782715  770898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0512 23:28:24.794225  770898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.183871971s)
	I0512 23:28:24.794260  770898 start.go:815] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0512 23:28:24.796661  770898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.910458295s)
	I0512 23:28:24.854445  770898 pod_ready.go:102] pod "coredns-64897985d-4bhs4" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:25.676617  770898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.657751735s)
	I0512 23:28:25.810431  770898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.626005558s)
	I0512 23:28:25.810589  770898 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220512231813-516044"
	I0512 23:28:26.036418  770898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.2535963s)
	I0512 23:28:26.039945  770898 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0512 23:28:26.042306  770898 addons.go:417] enableAddons completed in 3.644590537s
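The CoreDNS edit that completed above (the 2.18s sed pipeline finishing at 23:28:24) injects a hosts block immediately before the existing "forward . /etc/resolv.conf" directive, so that in-cluster lookups of host.minikube.internal resolve to the host-side gateway. Reconstructed from the sed expression in the log, the inserted Corefile fragment is:

    hosts {
       192.168.67.1 host.minikube.internal
       fallthrough
    }

The fallthrough directive lets every other name continue to the pre-existing forward rule, which is why only the one extra record is affected.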
	I0512 23:28:25.770407  843786 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir: (5.655852092s)
	I0512 23:28:25.770451  843786 kic.go:188] duration metric: took 5.656061 seconds to extract preloaded images to volume
	W0512 23:28:25.770606  843786 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0512 23:28:25.770730  843786 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 23:28:25.931643  843786 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220512231715-516044 --name enable-default-cni-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220512231715-516044 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220512231715-516044 --network enable-default-cni-20220512231715-516044 --ip 192.168.58.2 --volume enable-default-cni-20220512231715-516044:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c
	I0512 23:28:26.725335  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Running}}
	I0512 23:28:26.773175  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:26.827246  843786 cli_runner.go:164] Run: docker exec enable-default-cni-20220512231715-516044 stat /var/lib/dpkg/alternatives/iptables
	I0512 23:28:26.925200  843786 oci.go:144] the created container "enable-default-cni-20220512231715-516044" has a running status.
	I0512 23:28:26.925231  843786 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa...
	I0512 23:28:27.249017  843786 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 23:28:27.341205  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:27.380312  843786 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 23:28:27.380338  843786 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220512231715-516044 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 23:28:27.474148  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:27.523779  843786 machine.go:88] provisioning docker machine ...
	I0512 23:28:27.523822  843786 ubuntu.go:169] provisioning hostname "enable-default-cni-20220512231715-516044"
	I0512 23:28:27.523880  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:27.579593  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:27.579834  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:27.579863  843786 main.go:134] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-20220512231715-516044 && echo "enable-default-cni-20220512231715-516044" | sudo tee /etc/hostname
	I0512 23:28:27.738608  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: enable-default-cni-20220512231715-516044
	
	I0512 23:28:27.738698  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:27.780020  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:27.780264  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:27.780308  843786 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-20220512231715-516044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-20220512231715-516044/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-20220512231715-516044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 23:28:27.930029  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 23:28:27.930070  843786 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube}
	I0512 23:28:27.930132  843786 ubuntu.go:177] setting up certificates
	I0512 23:28:27.930150  843786 provision.go:83] configureAuth start
	I0512 23:28:27.930215  843786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220512231715-516044
	I0512 23:28:27.964762  843786 provision.go:138] copyHostCerts
	I0512 23:28:27.964849  843786 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem, removing ...
	I0512 23:28:27.964866  843786 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem
	I0512 23:28:27.964939  843786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem (1675 bytes)
	I0512 23:28:27.965045  843786 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem, removing ...
	I0512 23:28:27.965061  843786 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem
	I0512 23:28:27.965121  843786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem (1078 bytes)
	I0512 23:28:27.965191  843786 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem, removing ...
	I0512 23:28:27.965201  843786 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem
	I0512 23:28:27.965226  843786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem (1123 bytes)
	I0512 23:28:27.965278  843786 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-20220512231715-516044 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-20220512231715-516044]
	I0512 23:28:28.090353  843786 provision.go:172] copyRemoteCerts
	I0512 23:28:28.090425  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 23:28:28.090480  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:28.134647  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:28.239924  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 23:28:28.259486  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0512 23:28:28.280041  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 23:28:28.305083  843786 provision.go:86] duration metric: configureAuth took 374.912037ms
	I0512 23:28:28.305160  843786 ubuntu.go:193] setting minikube options for container-runtime
	I0512 23:28:28.305382  843786 config.go:178] Loaded profile config "enable-default-cni-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:28.305464  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:28.355363  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:28.355558  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:28.355587  843786 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 23:28:28.492540  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 23:28:28.492574  843786 ubuntu.go:71] root file system type: overlay
	I0512 23:28:28.492866  843786 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 23:28:28.492949  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:28.537790  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:28.537982  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:28.538085  843786 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 23:28:26.529202  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:29.027012  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:26.969151  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:28.978404  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
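The repeated pod_ready.go:102 lines above are a poll loop: roughly every 2.5 seconds (checks at 23:28:17.4, 19.9, 22.4, 24.9, 26.9, 28.9) each test re-reads the pod and logs its condition until it reports Ready or the wait budget (6m0s here) expires. A compact Go sketch of that pattern; isPodReady is a hypothetical placeholder for the real Kubernetes API lookup, and the interval is the one observed above:

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForReady re-checks on an interval and gives up at the timeout,
    // mirroring the logged behavior.
    func waitForReady(isPodReady func() bool, interval, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if isPodReady() {
    			return true
    		}
    		fmt.Println(`pod has status "Ready":"False"`) // matches the log lines above
    		time.Sleep(interval)
    	}
    	return false
    }

    func main() {
    	attempts := 0
    	ok := waitForReady(func() bool {
    		attempts++
    		return attempts > 3 // pretend the pod turns Ready on the 4th check
    	}, 2500*time.Millisecond, 6*time.Minute)
    	fmt.Println("ready:", ok)
    }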
	I0512 23:28:28.696741  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 23:28:28.696845  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:28.742175  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:28.742364  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:28.742395  843786 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 23:28:29.606843  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 23:28:28.691543147 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 23:28:29.606891  843786 machine.go:91] provisioned docker machine in 2.083082921s
	I0512 23:28:29.606904  843786 client.go:171] LocalClient.Create took 10.568776373s
	I0512 23:28:29.606916  843786 start.go:173] duration metric: libmachine.API.Create for "enable-default-cni-20220512231715-516044" took 10.568855897s
	I0512 23:28:29.606939  843786 start.go:306] post-start starting for "enable-default-cni-20220512231715-516044" (driver="docker")
	I0512 23:28:29.606947  843786 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 23:28:29.607018  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 23:28:29.607072  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:29.652464  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:29.751136  843786 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 23:28:29.753905  843786 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 23:28:29.753929  843786 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 23:28:29.753938  843786 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 23:28:29.753943  843786 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 23:28:29.753953  843786 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/addons for local assets ...
	I0512 23:28:29.754001  843786 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files for local assets ...
	I0512 23:28:29.754083  843786 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem -> 5160442.pem in /etc/ssl/certs
	I0512 23:28:29.754167  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 23:28:29.761146  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /etc/ssl/certs/5160442.pem (1708 bytes)
	I0512 23:28:29.782276  843786 start.go:309] post-start completed in 175.321598ms
	I0512 23:28:29.782686  843786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220512231715-516044
	I0512 23:28:29.829908  843786 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/config.json ...
	I0512 23:28:29.830487  843786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 23:28:29.830552  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:29.875938  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:29.969741  843786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 23:28:29.975669  843786 start.go:134] duration metric: createHost completed in 10.94066329s
	I0512 23:28:29.975697  843786 start.go:81] releasing machines lock for "enable-default-cni-20220512231715-516044", held for 10.940860777s
	I0512 23:28:29.975797  843786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220512231715-516044
	I0512 23:28:30.027088  843786 ssh_runner.go:195] Run: systemctl --version
	I0512 23:28:30.027135  843786 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 23:28:30.027178  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:30.027200  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:30.070104  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:30.072460  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:30.193691  843786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 23:28:30.205546  843786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:28:30.219357  843786 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 23:28:30.219426  843786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 23:28:30.232948  843786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 23:28:30.250418  843786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 23:28:30.368887  843786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 23:28:30.475719  843786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:28:30.486865  843786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 23:28:30.572570  843786 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 23:28:30.583832  843786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:28:30.626733  843786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:28:27.143702  770898 pod_ready.go:102] pod "coredns-64897985d-4bhs4" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:27.643200  770898 pod_ready.go:92] pod "coredns-64897985d-4bhs4" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:27.643235  770898 pod_ready.go:81] duration metric: took 5.013909167s waiting for pod "coredns-64897985d-4bhs4" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:27.643248  770898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-zcth8" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:29.655315  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:30.674157  843786 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 23:28:30.674357  843786 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 23:28:30.722986  843786 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0512 23:28:30.727525  843786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
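This grep -v / echo / cp sequence is an idempotent /etc/hosts rewrite: drop any stale line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. A Go sketch of the same edit (hostsPath is a hypothetical local stand-in for the guest's /etc/hosts, which the real command edits under sudo):

package main

import (
	"os"
	"strings"
)

// Keep exactly one line mapping host.minikube.internal, mirroring the
// { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ... sequence above.
func main() {
	const hostsPath = "hosts.txt"
	const entry = "192.168.58.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}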
	I0512 23:28:30.740190  843786 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:28:30.740283  843786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:28:30.779012  843786 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:28:30.779041  843786 docker.go:541] Images already preloaded, skipping extraction
	I0512 23:28:30.779102  843786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:28:30.821535  843786 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:28:30.821565  843786 cache_images.go:84] Images are preloaded, skipping loading
	I0512 23:28:30.821631  843786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 23:28:30.925253  843786 cni.go:95] Creating CNI manager for "bridge"
	I0512 23:28:30.925290  843786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 23:28:30.925308  843786 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-20220512231715-516044 NodeName:enable-default-cni-20220512231715-516044 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 23:28:30.925534  843786 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "enable-default-cni-20220512231715-516044"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0512 23:28:30.925652  843786 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=enable-default-cni-20220512231715-516044 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:enable-default-cni-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
	I0512 23:28:30.925728  843786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 23:28:30.935483  843786 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 23:28:30.935551  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 23:28:30.945502  843786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0512 23:28:30.959341  843786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 23:28:30.972297  843786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2062 bytes)
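The 2062-byte kubeadm.yaml.new just copied up is the four-document YAML stream rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only Go sketch that enumerates those documents, assuming a hypothetical local copy of the file:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Stand-in for /var/tmp/minikube/kubeadm.yaml.new from the log above.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// Documents in the generated file are separated by bare "---" lines.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s (%d bytes)\n", i+1, kind, len(doc))
	}
}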
	I0512 23:28:30.991361  843786 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0512 23:28:30.995409  843786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 23:28:31.009652  843786 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044 for IP: 192.168.58.2
	I0512 23:28:31.009772  843786 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key
	I0512 23:28:31.009822  843786 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key
	I0512 23:28:31.009889  843786 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.key
	I0512 23:28:31.009913  843786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt with IP's: []
	I0512 23:28:31.463677  843786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt ...
	I0512 23:28:31.463711  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: {Name:mk7c09ec5a15390e46415471786b452a5023b626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.463892  843786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.key ...
	I0512 23:28:31.463909  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.key: {Name:mk374bf671fb9f3d39da9a04e035a0ef9d918f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.464013  843786 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key.cee25041
	I0512 23:28:31.464029  843786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 23:28:31.746329  843786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt.cee25041 ...
	I0512 23:28:31.746363  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt.cee25041: {Name:mkb8a90dd598aec292eb807d878fef881dfb8fc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.746530  843786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key.cee25041 ...
	I0512 23:28:31.746542  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key.cee25041: {Name:mk79c449876eeaaed674480c7108a6023b88f67a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.746622  843786 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt
	I0512 23:28:31.746677  843786 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key
	I0512 23:28:31.746715  843786 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.key
	I0512 23:28:31.746739  843786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.crt with IP's: []
	I0512 23:28:31.911624  843786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.crt ...
	I0512 23:28:31.911658  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.crt: {Name:mk7685dd2a9f3e2f18f0122affc2e2e2bc85cc05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.911843  843786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.key ...
	I0512 23:28:31.911860  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.key: {Name:mk9f53a886e79d975a676e06c92bd7b1a4f07e69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
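Each generating/writing pair above follows the same shape: mint a key pair, build an x509 template with the logged SANs, sign with the cached CA, and write the PEM files under a lock. A self-contained Go sketch of that flow; the throwaway CA and output filenames are stand-ins, since loading the real minikubeCA key is elided here:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the cached minikubeCA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the log
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Leaf certificate with the SANs logged for apiserver.crt.cee25041 above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // assumed leaf lifetime
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	check(err)
	check(os.WriteFile("apiserver.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("apiserver.key",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600))
}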
	I0512 23:28:31.912054  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem (1338 bytes)
	W0512 23:28:31.912107  843786 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044_empty.pem, impossibly tiny 0 bytes
	I0512 23:28:31.912127  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem (1679 bytes)
	I0512 23:28:31.912165  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem (1078 bytes)
	I0512 23:28:31.912205  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem (1123 bytes)
	I0512 23:28:31.912244  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem (1675 bytes)
	I0512 23:28:31.912307  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem (1708 bytes)
	I0512 23:28:31.912889  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 23:28:31.933597  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 23:28:31.954672  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 23:28:31.972715  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 23:28:31.993120  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 23:28:32.012919  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0512 23:28:32.035330  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 23:28:32.052534  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0512 23:28:32.070090  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 23:28:32.090729  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem --> /usr/share/ca-certificates/516044.pem (1338 bytes)
	I0512 23:28:32.110781  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /usr/share/ca-certificates/5160442.pem (1708 bytes)
	I0512 23:28:32.130616  843786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 23:28:32.143950  843786 ssh_runner.go:195] Run: openssl version
	I0512 23:28:32.149501  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5160442.pem && ln -fs /usr/share/ca-certificates/5160442.pem /etc/ssl/certs/5160442.pem"
	I0512 23:28:32.158189  843786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5160442.pem
	I0512 23:28:32.161273  843786 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 12 22:55 /usr/share/ca-certificates/5160442.pem
	I0512 23:28:32.161325  843786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5160442.pem
	I0512 23:28:32.166749  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5160442.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 23:28:32.174140  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 23:28:32.181587  843786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:28:32.184481  843786 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 12 22:51 /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:28:32.184528  843786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:28:32.189667  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 23:28:32.197872  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516044.pem && ln -fs /usr/share/ca-certificates/516044.pem /etc/ssl/certs/516044.pem"
	I0512 23:28:32.205247  843786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516044.pem
	I0512 23:28:32.208252  843786 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 12 22:55 /usr/share/ca-certificates/516044.pem
	I0512 23:28:32.208295  843786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516044.pem
	I0512 23:28:32.212943  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516044.pem /etc/ssl/certs/51391683.0"
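The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA certificate gets an /etc/ssl/certs symlink named after its subject hash (3ec20f2e.0, b5213941.0, 51391683.0 in this run). A Go sketch of one iteration, shelling out to the same openssl invocation the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/5160442.pem" // path taken from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// The logged command is `sudo ln -fs`; os.Symlink will not overwrite,
	// so clear any stale link first (this needs root, like the original).
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}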
	I0512 23:28:32.220516  843786 kubeadm.go:391] StartCluster: {Name:enable-default-cni-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:enable-default-cni-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:28:32.220667  843786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 23:28:32.256057  843786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 23:28:32.264457  843786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 23:28:32.274539  843786 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 23:28:32.274597  843786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 23:28:32.282435  843786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 23:28:32.282484  843786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 23:28:32.827968  843786 out.go:204]   - Generating certificates and keys ...
	I0512 23:28:31.526546  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:34.026340  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:30.980533  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:33.476217  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:32.154642  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:34.176722  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:36.654057  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:35.419549  843786 out.go:204]   - Booting up control plane ...
	I0512 23:28:36.027893  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:38.526607  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:35.477295  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:37.480114  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:39.968161  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:39.156610  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:41.655468  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:43.463971  843786 out.go:204]   - Configuring RBAC rules ...
	I0512 23:28:43.879904  843786 cni.go:95] Creating CNI manager for "bridge"
	I0512 23:28:43.881636  843786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0512 23:28:41.026112  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:43.529445  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:42.476255  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:44.478289  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:44.154501  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:46.653952  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:43.883104  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0512 23:28:43.892269  843786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
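The 457-byte /etc/cni/net.d/1-k8s.conflist copied above selects the standard CNI bridge plugin for the 10.244.0.0/16 pod CIDR. Its exact contents are not in the log; the Go sketch below writes a representative bridge conflist (field values are assumptions, not minikube's byte-for-byte file):

package main

import "os"

// Representative only: the bridge, portmap, and host-local plugin names are
// standard CNI plugins, and the subnet matches the pod CIDR logged above,
// but the real file minikube ships may differ in fields and formatting.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}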
	I0512 23:28:43.910251  843786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 23:28:43.910387  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:43.910493  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=5812f8ec06db4997111dc3269784a7f664662f05 minikube.k8s.io/name=enable-default-cni-20220512231715-516044 minikube.k8s.io/updated_at=2022_05_12T23_28_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:44.393174  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:44.393247  843786 ops.go:34] apiserver oom_adj: -16
	I0512 23:28:44.965798  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:45.466023  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:45.965204  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:46.465835  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:46.965218  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:47.465889  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:47.965258  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:48.465441  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:46.024526  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:48.025394  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:46.968381  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:48.978100  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:49.154580  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:51.655662  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:48.966225  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:49.465797  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:49.966008  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:50.466189  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:50.965705  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:51.465267  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:51.965223  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:52.465267  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:52.966129  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:53.465244  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:50.026193  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:52.026387  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:54.026539  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:50.978316  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:52.978356  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:54.153898  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:56.153995  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:53.965637  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:54.465227  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:54.965220  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:55.465736  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:55.965210  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:56.465273  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:56.965957  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:57.097629  843786 kubeadm.go:1020] duration metric: took 13.187281093s to wait for elevateKubeSystemPrivileges.
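The burst of identical `kubectl get sa default` runs above is a poll loop: minikube retries until kubeadm has created the default service account, at which point kube-system privileges can be elevated. A simplified Go sketch of that loop (the 500ms interval and 2m deadline are assumptions, not minikube's exact values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.23.5/kubectl" // path from the log above
	deadline := time.Now().Add(2 * time.Minute)             // assumed timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	fmt.Println("timed out waiting for default service account")
}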
	I0512 23:28:57.097674  843786 kubeadm.go:393] StartCluster complete in 24.877166703s
	I0512 23:28:57.097714  843786 settings.go:142] acquiring lock: {Name:mkfe717360cf8b2fa45465ab4bd68ece68561c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:57.097876  843786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:28:57.100783  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig: {Name:mk0f3828db53b6683822ca2fe8148b87d561cdb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:57.621636  843786 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "enable-default-cni-20220512231715-516044" rescaled to 1
	I0512 23:28:57.621768  843786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 23:28:57.621776  843786 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:28:57.623744  843786 out.go:177] * Verifying Kubernetes components...
	I0512 23:28:57.621904  843786 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 23:28:57.622045  843786 config.go:178] Loaded profile config "enable-default-cni-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:57.623916  843786 addons.go:65] Setting storage-provisioner=true in profile "enable-default-cni-20220512231715-516044"
	I0512 23:28:57.623960  843786 addons.go:153] Setting addon storage-provisioner=true in "enable-default-cni-20220512231715-516044"
	W0512 23:28:57.623975  843786 addons.go:165] addon storage-provisioner should already be in state true
	I0512 23:28:57.624038  843786 host.go:66] Checking if "enable-default-cni-20220512231715-516044" exists ...
	I0512 23:28:57.624549  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:57.623921  843786 addons.go:65] Setting default-storageclass=true in profile "enable-default-cni-20220512231715-516044"
	I0512 23:28:57.624676  843786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-20220512231715-516044"
	I0512 23:28:57.625072  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:57.626451  843786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:28:57.669001  843786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 23:28:57.670271  843786 addons.go:153] Setting addon default-storageclass=true in "enable-default-cni-20220512231715-516044"
	W0512 23:28:57.670430  843786 addons.go:165] addon default-storageclass should already be in state true
	I0512 23:28:57.670435  843786 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:57.670450  843786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 23:28:57.670461  843786 host.go:66] Checking if "enable-default-cni-20220512231715-516044" exists ...
	I0512 23:28:57.670493  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:57.670839  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:57.714663  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:57.717213  843786 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:57.717242  843786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 23:28:57.717301  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:57.770811  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:57.775421  843786 node_ready.go:35] waiting up to 5m0s for node "enable-default-cni-20220512231715-516044" to be "Ready" ...
	I0512 23:28:57.775761  843786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 23:28:57.779533  843786 node_ready.go:49] node "enable-default-cni-20220512231715-516044" has status "Ready":"True"
	I0512 23:28:57.779560  843786 node_ready.go:38] duration metric: took 4.104315ms waiting for node "enable-default-cni-20220512231715-516044" to be "Ready" ...
	I0512 23:28:57.779573  843786 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:28:57.789277  843786 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-mn5vf" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:57.982704  843786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:57.989667  843786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:58.802925  843786 pod_ready.go:92] pod "coredns-64897985d-mn5vf" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.802963  843786 pod_ready.go:81] duration metric: took 1.013645862s waiting for pod "coredns-64897985d-mn5vf" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.802976  843786 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-zh8fj" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.807214  843786 pod_ready.go:92] pod "coredns-64897985d-zh8fj" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.807232  843786 pod_ready.go:81] duration metric: took 4.248572ms waiting for pod "coredns-64897985d-zh8fj" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.807244  843786 pod_ready.go:78] waiting up to 5m0s for pod "etcd-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.811274  843786 pod_ready.go:92] pod "etcd-enable-default-cni-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.811293  843786 pod_ready.go:81] duration metric: took 4.042918ms waiting for pod "etcd-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.811303  843786 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.816660  843786 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.816679  843786 pod_ready.go:81] duration metric: took 5.368649ms waiting for pod "kube-apiserver-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.816687  843786 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.980205  843786 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.980231  843786 pod_ready.go:81] duration metric: took 163.537185ms waiting for pod "kube-controller-manager-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.980244  843786 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-r96dv" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:59.095449  843786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.319656899s)
	I0512 23:28:59.095488  843786 start.go:815] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
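The long sed pipeline that just completed splices a hosts stanza into the CoreDNS Corefile immediately before its `forward . /etc/resolv.conf` directive, which is how host.minikube.internal becomes resolvable from pods. A Go sketch of the same text transformation (the Corefile excerpt is illustrative, not the cluster's full configmap):

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `        forward . /etc/resolv.conf {
           max_concurrent 1000
        }`
	stanza := `        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
`
	// Mirror sed's /forward . \/etc\/resolv.conf/i: insert the hosts block
	// on the line before the forward directive.
	patched := strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		stanza+"        forward . /etc/resolv.conf", 1)
	fmt.Println(patched)
}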
	I0512 23:28:59.180597  843786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.197847052s)
	I0512 23:28:59.180676  843786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.19096805s)
	I0512 23:28:59.182440  843786 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 23:28:56.027207  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:58.526504  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:55.467832  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:57.481247  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:59.967910  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:59.184008  843786 addons.go:417] enableAddons completed in 1.562115267s
	I0512 23:28:59.379643  843786 pod_ready.go:92] pod "kube-proxy-r96dv" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:59.379673  843786 pod_ready.go:81] duration metric: took 399.420983ms waiting for pod "kube-proxy-r96dv" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:59.379712  843786 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:59.779346  843786 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:59.779369  843786 pod_ready.go:81] duration metric: took 399.613626ms waiting for pod "kube-scheduler-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:59.779377  843786 pod_ready.go:38] duration metric: took 1.999792048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:28:59.779402  843786 api_server.go:51] waiting for apiserver process to appear ...
	I0512 23:28:59.779436  843786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 23:28:59.790804  843786 api_server.go:71] duration metric: took 2.168979232s to wait for apiserver process to appear ...
	I0512 23:28:59.790837  843786 api_server.go:87] waiting for apiserver healthz status ...
	I0512 23:28:59.790851  843786 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0512 23:28:59.795859  843786 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0512 23:28:59.796684  843786 api_server.go:140] control plane version: v1.23.5
	I0512 23:28:59.796706  843786 api_server.go:130] duration metric: took 5.861094ms to wait for apiserver health ...
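The healthz probe above is a plain HTTPS GET that must return 200 with body `ok`. An equivalent stdlib Go check; skipping certificate verification is a shortcut for this sketch, where a real client would trust the cluster CA from /var/lib/minikube/certs instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut: do not verify the apiserver cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.58.2:8443/healthz") // endpoint from the log
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}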
	I0512 23:28:59.796716  843786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 23:28:59.981517  843786 system_pods.go:59] 8 kube-system pods found
	I0512 23:28:59.981545  843786 system_pods.go:61] "coredns-64897985d-mn5vf" [e59b9e24-8d91-48b6-b7a2-18aa6d26b098] Running
	I0512 23:28:59.981550  843786 system_pods.go:61] "coredns-64897985d-zh8fj" [38aa28b4-07a7-4d06-b518-54e6d1afdc23] Running
	I0512 23:28:59.981554  843786 system_pods.go:61] "etcd-enable-default-cni-20220512231715-516044" [8fcf7497-c4ba-4ab4-a789-d9120522eca1] Running
	I0512 23:28:59.981559  843786 system_pods.go:61] "kube-apiserver-enable-default-cni-20220512231715-516044" [20d0fa35-229f-41fa-98bf-cc303bd5b9c8] Running
	I0512 23:28:59.981563  843786 system_pods.go:61] "kube-controller-manager-enable-default-cni-20220512231715-516044" [fa0c0d7a-a738-4278-8b3d-be23fdbb234a] Running
	I0512 23:28:59.981567  843786 system_pods.go:61] "kube-proxy-r96dv" [d7e73c47-eff8-4c0e-844e-f90f187e1760] Running
	I0512 23:28:59.981571  843786 system_pods.go:61] "kube-scheduler-enable-default-cni-20220512231715-516044" [7248bedc-ff56-42a2-8297-52763e4e93d3] Running
	I0512 23:28:59.981578  843786 system_pods.go:61] "storage-provisioner" [97560aba-e557-4b15-866e-79e708fac555] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 23:28:59.981583  843786 system_pods.go:74] duration metric: took 184.861288ms to wait for pod list to return data ...
	I0512 23:28:59.981594  843786 default_sa.go:34] waiting for default service account to be created ...
	I0512 23:29:00.178632  843786 default_sa.go:45] found service account: "default"
	I0512 23:29:00.178661  843786 default_sa.go:55] duration metric: took 197.058705ms for default service account to be created ...
	I0512 23:29:00.178689  843786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0512 23:29:00.385275  843786 system_pods.go:86] 8 kube-system pods found
	I0512 23:29:00.385316  843786 system_pods.go:89] "coredns-64897985d-mn5vf" [e59b9e24-8d91-48b6-b7a2-18aa6d26b098] Running
	I0512 23:29:00.385325  843786 system_pods.go:89] "coredns-64897985d-zh8fj" [38aa28b4-07a7-4d06-b518-54e6d1afdc23] Running
	I0512 23:29:00.385332  843786 system_pods.go:89] "etcd-enable-default-cni-20220512231715-516044" [8fcf7497-c4ba-4ab4-a789-d9120522eca1] Running
	I0512 23:29:00.385339  843786 system_pods.go:89] "kube-apiserver-enable-default-cni-20220512231715-516044" [20d0fa35-229f-41fa-98bf-cc303bd5b9c8] Running
	I0512 23:29:00.385346  843786 system_pods.go:89] "kube-controller-manager-enable-default-cni-20220512231715-516044" [fa0c0d7a-a738-4278-8b3d-be23fdbb234a] Running
	I0512 23:29:00.385354  843786 system_pods.go:89] "kube-proxy-r96dv" [d7e73c47-eff8-4c0e-844e-f90f187e1760] Running
	I0512 23:29:00.385361  843786 system_pods.go:89] "kube-scheduler-enable-default-cni-20220512231715-516044" [7248bedc-ff56-42a2-8297-52763e4e93d3] Running
	I0512 23:29:00.385373  843786 system_pods.go:89] "storage-provisioner" [97560aba-e557-4b15-866e-79e708fac555] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 23:29:00.385391  843786 system_pods.go:126] duration metric: took 206.693489ms to wait for k8s-apps to be running ...
	I0512 23:29:00.385404  843786 system_svc.go:44] waiting for kubelet service to be running ....
	I0512 23:29:00.385465  843786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:29:00.398498  843786 system_svc.go:56] duration metric: took 13.082558ms WaitForService to wait for kubelet.
	I0512 23:29:00.398531  843786 kubeadm.go:548] duration metric: took 2.776714411s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0512 23:29:00.398559  843786 node_conditions.go:102] verifying NodePressure condition ...
	I0512 23:29:00.579228  843786 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0512 23:29:00.579257  843786 node_conditions.go:123] node cpu capacity is 8
	I0512 23:29:00.579269  843786 node_conditions.go:105] duration metric: took 180.705002ms to run NodePressure ...
	I0512 23:29:00.579279  843786 start.go:213] waiting for startup goroutines ...
	I0512 23:29:00.643006  843786 start.go:504] kubectl: 1.24.0, cluster: 1.23.5 (minor skew: 1)
	I0512 23:29:00.645753  843786 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-20220512231715-516044" cluster and "default" namespace by default
	I0512 23:28:58.154072  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:00.656543  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:01.025356  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:03.026550  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:01.968652  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:03.974884  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:03.154221  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:05.653524  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:05.525281  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:07.525885  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:09.525957  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:06.467854  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:08.476943  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:07.653602  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:09.654071  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:11.526121  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:13.526224  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:10.477925  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:12.967550  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:14.978574  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:12.154248  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:14.653565  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:16.653709  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:15.526389  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:18.026859  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:17.468434  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:19.476036  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:19.153833  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:21.653674  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:20.526747  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:23.025859  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:21.477906  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:23.968270  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:24.155196  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:26.653080  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:25.028740  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:27.525173  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:29.525288  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:25.977525  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:27.980380  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:28.653883  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:31.155525  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:31.525370  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:33.525647  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:30.477827  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:32.478372  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:34.976940  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:33.653792  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:35.654215  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:36.024944  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:38.026490  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:37.477931  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:39.973307  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:37.654280  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:39.654411  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:41.655082  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:40.026678  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:42.526824  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:41.978236  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:44.468770  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:44.154462  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:46.652938  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:45.026505  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:47.026790  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:49.525228  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:46.469633  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:48.477776  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:48.653988  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:51.154209  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:51.526288  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:53.527364  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:50.477809  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:52.478353  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:54.478745  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:53.653573  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:55.654602  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:56.025845  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:58.526119  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:56.968137  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:58.978195  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:58.153048  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:00.154480  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:00.526555  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:03.026212  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:00.978780  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:03.467812  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:02.654378  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:04.654423  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:05.525491  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:07.526193  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:05.478342  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:07.967819  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:09.968674  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:07.153207  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:09.154559  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:11.654885  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:10.024927  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:12.027990  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:14.525619  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:11.978136  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:14.468896  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:14.154148  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:16.156476  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:16.526445  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:19.025845  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:16.476268  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:18.478193  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:18.653377  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:21.154008  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:21.025892  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:23.026269  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:20.976047  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:23.476930  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:23.654269  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:26.154963  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:25.027037  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:27.526655  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:25.478220  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:27.976029  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:29.978870  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:28.654340  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:31.154240  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:30.025971  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:32.027387  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:34.529128  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:32.478236  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:34.977791  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:33.154371  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:35.653897  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:37.024749  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:39.526504  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:37.476346  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:39.476793  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:38.154343  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:40.653956  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:41.530576  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:44.025150  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:41.967569  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:43.967993  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:43.154009  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:45.154589  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:46.025375  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:48.026024  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:45.976150  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:48.469934  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:47.654238  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:50.153244  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:50.526517  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:53.024955  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:50.978269  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:53.467910  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:52.153879  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:54.653466  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:56.654284  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:55.025551  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:57.025734  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:59.025853  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:55.978305  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:58.477955  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:30:59.153865  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:01.154535  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:01.527251  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:04.024835  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:00.478561  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:02.968630  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:04.977983  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:03.654431  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:05.654492  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:06.026574  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:08.026806  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:07.468152  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:09.976356  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:08.154683  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:10.653426  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:10.525360  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:12.526262  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:11.978009  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:14.478561  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:12.653786  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:14.654115  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:15.025676  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:17.525442  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:19.525809  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:16.974816  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:18.977000  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:17.154306  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:19.654051  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:22.025894  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:24.526111  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:20.977483  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:22.978171  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:22.154406  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:24.653712  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:27.026106  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:29.524635  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:25.476942  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:27.478027  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:29.978119  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:27.154550  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:29.653464  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:31.655574  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:31.525848  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:33.526156  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:32.477599  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:34.478157  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:34.154449  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:36.155426  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:35.526679  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:38.025486  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:36.976294  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:38.977541  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:38.654636  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:40.654700  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:40.026532  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:42.524703  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:44.526214  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:41.469287  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:43.976024  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:43.154058  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:45.154167  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:47.026159  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:49.524853  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:45.976127  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:45.980535  817261 pod_ready.go:81] duration metric: took 4m0.033406573s waiting for pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace to be "Ready" ...
	E0512 23:31:45.980568  817261 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:31:45.980580  817261 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-wzwqd" in "kube-system" namespace to be "Ready" ...
	I0512 23:31:47.993055  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:47.154502  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:49.653323  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:51.653932  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:51.525654  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:53.526275  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:50.494786  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:52.992187  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:54.993340  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:53.654636  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:56.154221  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:55.526545  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:58.025410  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:57.492844  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:59.494116  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:58.653433  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:00.654304  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:00.025485  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:02.026046  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:04.026390  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:01.994121  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:03.994187  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:03.154084  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:05.653824  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:06.524790  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:08.525709  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:06.493790  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:08.992606  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:07.654155  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:10.155085  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:11.025169  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:13.025734  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:10.992702  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:12.993451  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:14.993526  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:12.653439  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:14.654706  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:15.525796  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:17.528719  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:17.528741  826131 pod_ready.go:81] duration metric: took 4m0.016989164s waiting for pod "coredns-64897985d-rqv6q" in "kube-system" namespace to be "Ready" ...
	E0512 23:32:17.528749  826131 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:32:17.528757  826131 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.532726  826131 pod_ready.go:92] pod "etcd-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.532794  826131 pod_ready.go:81] duration metric: took 4.02892ms waiting for pod "etcd-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.532823  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.537261  826131 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.537283  826131 pod_ready.go:81] duration metric: took 4.440767ms waiting for pod "kube-apiserver-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.537295  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.541278  826131 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.541296  826131 pod_ready.go:81] duration metric: took 3.994407ms waiting for pod "kube-controller-manager-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.541305  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-2qmfq" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.922912  826131 pod_ready.go:92] pod "kube-proxy-2qmfq" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.922939  826131 pod_ready.go:81] duration metric: took 381.627854ms waiting for pod "kube-proxy-2qmfq" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.922952  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:18.322734  826131 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:18.322762  826131 pod_ready.go:81] duration metric: took 399.801441ms waiting for pod "kube-scheduler-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:18.322776  826131 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-64z47" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.493066  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:19.494355  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:17.154752  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:19.156423  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:21.653621  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:20.728907  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:22.730221  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:21.992271  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:23.995958  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:23.654101  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:26.153917  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:25.230961  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:27.729389  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:26.492193  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:28.496052  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:27.658212  770898 pod_ready.go:81] duration metric: took 4m0.014945263s waiting for pod "coredns-64897985d-zcth8" in "kube-system" namespace to be "Ready" ...
	E0512 23:32:27.658243  770898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:32:27.658253  770898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.662690  770898 pod_ready.go:92] pod "etcd-embed-certs-20220512231813-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:27.662710  770898 pod_ready.go:81] duration metric: took 4.449316ms waiting for pod "etcd-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.662721  770898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.666989  770898 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220512231813-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:27.667006  770898 pod_ready.go:81] duration metric: took 4.278203ms waiting for pod "kube-apiserver-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.667014  770898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.670975  770898 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220512231813-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:27.670994  770898 pod_ready.go:81] duration metric: took 3.972099ms waiting for pod "kube-controller-manager-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.671003  770898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thpfx" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:28.051694  770898 pod_ready.go:92] pod "kube-proxy-thpfx" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:28.051723  770898 pod_ready.go:81] duration metric: took 380.712904ms waiting for pod "kube-proxy-thpfx" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:28.051736  770898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:28.451574  770898 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220512231813-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:28.451597  770898 pod_ready.go:81] duration metric: took 399.851675ms waiting for pod "kube-scheduler-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:28.451608  770898 pod_ready.go:38] duration metric: took 4m5.829919598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
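Note that pod_ready.go waits on the Ready condition, not the Running phase: coredns-64897985d-zcth8 appears later in this log as Running / Ready:ContainersNotReady, which never satisfies the wait. A sketch of querying both fields for that pod to see the distinction:

	# phase (e.g. Running) and the Ready condition status (e.g. False) are separate fields
	kubectl -n kube-system get pod coredns-64897985d-zcth8 \
	  -o jsonpath='{.status.phase} {.status.conditions[?(@.type=="Ready")].status}'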
	I0512 23:32:28.451669  770898 api_server.go:51] waiting for apiserver process to appear ...
	I0512 23:32:28.451737  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0512 23:32:28.497355  770898 logs.go:274] 1 containers: [48098a84d7fd]
	I0512 23:32:28.497425  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0512 23:32:28.539451  770898 logs.go:274] 1 containers: [900ff0eeacc6]
	I0512 23:32:28.539535  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0512 23:32:28.573397  770898 logs.go:274] 1 containers: [aa5767628f6c]
	I0512 23:32:28.573473  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0512 23:32:28.609617  770898 logs.go:274] 1 containers: [b2d43c18073b]
	I0512 23:32:28.609698  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0512 23:32:28.642339  770898 logs.go:274] 1 containers: [dc7bed8be1c3]
	I0512 23:32:28.642408  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0512 23:32:28.677534  770898 logs.go:274] 2 containers: [987dc4684b4b 287730a8ff0d]
	I0512 23:32:28.677609  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0512 23:32:28.711848  770898 logs.go:274] 2 containers: [727092ac44e3 acbd1356496e]
	I0512 23:32:28.711936  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0512 23:32:28.745111  770898 logs.go:274] 1 containers: [dd2291ed28a8]
	I0512 23:32:28.745161  770898 logs.go:123] Gathering logs for kube-apiserver [48098a84d7fd] ...
	I0512 23:32:28.745176  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48098a84d7fd"
	I0512 23:32:28.786494  770898 logs.go:123] Gathering logs for etcd [900ff0eeacc6] ...
	I0512 23:32:28.786527  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 900ff0eeacc6"
	I0512 23:32:28.847350  770898 logs.go:123] Gathering logs for kubernetes-dashboard [287730a8ff0d] ...
	I0512 23:32:28.847396  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 287730a8ff0d"
	I0512 23:32:28.891199  770898 logs.go:123] Gathering logs for Docker ...
	I0512 23:32:28.891235  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0512 23:32:28.920464  770898 logs.go:123] Gathering logs for kubelet ...
	I0512 23:32:28.920514  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0512 23:32:29.035032  770898 logs.go:123] Gathering logs for coredns [aa5767628f6c] ...
	I0512 23:32:29.035072  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa5767628f6c"
	I0512 23:32:29.071939  770898 logs.go:123] Gathering logs for kubernetes-dashboard [987dc4684b4b] ...
	I0512 23:32:29.071970  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987dc4684b4b"
	I0512 23:32:29.116043  770898 logs.go:123] Gathering logs for storage-provisioner [727092ac44e3] ...
	I0512 23:32:29.116083  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727092ac44e3"
	I0512 23:32:29.156331  770898 logs.go:123] Gathering logs for dmesg ...
	I0512 23:32:29.156367  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0512 23:32:29.193616  770898 logs.go:123] Gathering logs for kube-scheduler [b2d43c18073b] ...
	I0512 23:32:29.193660  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d43c18073b"
	I0512 23:32:29.244418  770898 logs.go:123] Gathering logs for storage-provisioner [acbd1356496e] ...
	I0512 23:32:29.244455  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acbd1356496e"
	I0512 23:32:29.286227  770898 logs.go:123] Gathering logs for container status ...
	I0512 23:32:29.286255  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0512 23:32:29.323818  770898 logs.go:123] Gathering logs for describe nodes ...
	I0512 23:32:29.323866  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0512 23:32:29.528873  770898 logs.go:123] Gathering logs for kube-proxy [dc7bed8be1c3] ...
	I0512 23:32:29.528918  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7bed8be1c3"
	I0512 23:32:29.565549  770898 logs.go:123] Gathering logs for kube-controller-manager [dd2291ed28a8] ...
	I0512 23:32:29.565584  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd2291ed28a8"
	I0512 23:32:30.229507  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:32.230528  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:30.993003  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:32.993251  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:34.993417  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:32.130138  770898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 23:32:32.141508  770898 api_server.go:71] duration metric: took 4m9.74381138s to wait for apiserver process to appear ...
	I0512 23:32:32.141542  770898 api_server.go:87] waiting for apiserver healthz status ...
	I0512 23:32:32.141612  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0512 23:32:32.174741  770898 logs.go:274] 1 containers: [48098a84d7fd]
	I0512 23:32:32.174806  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0512 23:32:32.208424  770898 logs.go:274] 1 containers: [900ff0eeacc6]
	I0512 23:32:32.208515  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0512 23:32:32.248530  770898 logs.go:274] 1 containers: [aa5767628f6c]
	I0512 23:32:32.248625  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0512 23:32:32.288568  770898 logs.go:274] 1 containers: [b2d43c18073b]
	I0512 23:32:32.288658  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0512 23:32:32.327993  770898 logs.go:274] 1 containers: [dc7bed8be1c3]
	I0512 23:32:32.328078  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0512 23:32:32.363739  770898 logs.go:274] 2 containers: [987dc4684b4b 287730a8ff0d]
	I0512 23:32:32.363826  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0512 23:32:32.401236  770898 logs.go:274] 1 containers: [727092ac44e3]
	I0512 23:32:32.401323  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0512 23:32:32.440931  770898 logs.go:274] 1 containers: [dd2291ed28a8]
	I0512 23:32:32.440985  770898 logs.go:123] Gathering logs for describe nodes ...
	I0512 23:32:32.441003  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0512 23:32:32.544688  770898 logs.go:123] Gathering logs for kube-apiserver [48098a84d7fd] ...
	I0512 23:32:32.544723  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48098a84d7fd"
	I0512 23:32:32.590838  770898 logs.go:123] Gathering logs for etcd [900ff0eeacc6] ...
	I0512 23:32:32.590886  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 900ff0eeacc6"
	I0512 23:32:32.664264  770898 logs.go:123] Gathering logs for kube-scheduler [b2d43c18073b] ...
	I0512 23:32:32.664310  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d43c18073b"
	I0512 23:32:32.722865  770898 logs.go:123] Gathering logs for container status ...
	I0512 23:32:32.722907  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0512 23:32:32.759017  770898 logs.go:123] Gathering logs for dmesg ...
	I0512 23:32:32.759059  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0512 23:32:32.804656  770898 logs.go:123] Gathering logs for coredns [aa5767628f6c] ...
	I0512 23:32:32.804700  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa5767628f6c"
	I0512 23:32:32.846176  770898 logs.go:123] Gathering logs for kube-proxy [dc7bed8be1c3] ...
	I0512 23:32:32.846205  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7bed8be1c3"
	I0512 23:32:32.882921  770898 logs.go:123] Gathering logs for kubernetes-dashboard [287730a8ff0d] ...
	I0512 23:32:32.882955  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 287730a8ff0d"
	I0512 23:32:32.926829  770898 logs.go:123] Gathering logs for Docker ...
	I0512 23:32:32.926863  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0512 23:32:32.950798  770898 logs.go:123] Gathering logs for storage-provisioner [727092ac44e3] ...
	I0512 23:32:32.950838  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727092ac44e3"
	I0512 23:32:32.988067  770898 logs.go:123] Gathering logs for kubelet ...
	I0512 23:32:32.988106  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0512 23:32:33.102130  770898 logs.go:123] Gathering logs for kubernetes-dashboard [987dc4684b4b] ...
	I0512 23:32:33.102177  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987dc4684b4b"
	I0512 23:32:33.144153  770898 logs.go:123] Gathering logs for kube-controller-manager [dd2291ed28a8] ...
	I0512 23:32:33.144202  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd2291ed28a8"
	I0512 23:32:35.700314  770898 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0512 23:32:35.705418  770898 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0512 23:32:35.706257  770898 api_server.go:140] control plane version: v1.23.5
	I0512 23:32:35.706281  770898 api_server.go:130] duration metric: took 3.56473069s to wait for apiserver health ...
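The healthz probe above is a plain GET against the apiserver. A sketch of issuing it by hand; kubectl routes it through the kubeconfig credentials, while the direct curl relies on /healthz being reachable by unauthenticated callers (the Kubernetes default binding, but not guaranteed on hardened clusters):

	# authenticated probe via the configured context; prints "ok" when healthy
	kubectl get --raw /healthz
	# direct probe, skipping TLS verification; address taken from the log above
	curl -k https://192.168.67.2:8443/healthz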
	I0512 23:32:35.706292  770898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 23:32:35.706347  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0512 23:32:35.739928  770898 logs.go:274] 1 containers: [48098a84d7fd]
	I0512 23:32:35.739999  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0512 23:32:35.770636  770898 logs.go:274] 1 containers: [900ff0eeacc6]
	I0512 23:32:35.770705  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0512 23:32:35.803485  770898 logs.go:274] 1 containers: [aa5767628f6c]
	I0512 23:32:35.803564  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0512 23:32:35.840458  770898 logs.go:274] 1 containers: [b2d43c18073b]
	I0512 23:32:35.840534  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0512 23:32:35.880164  770898 logs.go:274] 1 containers: [dc7bed8be1c3]
	I0512 23:32:35.880250  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0512 23:32:35.920165  770898 logs.go:274] 1 containers: [987dc4684b4b]
	I0512 23:32:35.920267  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0512 23:32:35.953726  770898 logs.go:274] 1 containers: [727092ac44e3]
	I0512 23:32:35.953810  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0512 23:32:35.987190  770898 logs.go:274] 1 containers: [dd2291ed28a8]
	I0512 23:32:35.987230  770898 logs.go:123] Gathering logs for kubelet ...
	I0512 23:32:35.987247  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0512 23:32:36.103312  770898 logs.go:123] Gathering logs for kube-apiserver [48098a84d7fd] ...
	I0512 23:32:36.103370  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48098a84d7fd"
	I0512 23:32:36.151474  770898 logs.go:123] Gathering logs for kube-scheduler [b2d43c18073b] ...
	I0512 23:32:36.151516  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d43c18073b"
	I0512 23:32:36.198185  770898 logs.go:123] Gathering logs for kube-controller-manager [dd2291ed28a8] ...
	I0512 23:32:36.198217  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd2291ed28a8"
	I0512 23:32:36.257601  770898 logs.go:123] Gathering logs for storage-provisioner [727092ac44e3] ...
	I0512 23:32:36.257641  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727092ac44e3"
	I0512 23:32:36.298125  770898 logs.go:123] Gathering logs for Docker ...
	I0512 23:32:36.298155  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0512 23:32:36.324682  770898 logs.go:123] Gathering logs for dmesg ...
	I0512 23:32:36.324718  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0512 23:32:36.358743  770898 logs.go:123] Gathering logs for describe nodes ...
	I0512 23:32:36.358777  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0512 23:32:36.462551  770898 logs.go:123] Gathering logs for etcd [900ff0eeacc6] ...
	I0512 23:32:36.462583  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 900ff0eeacc6"
	I0512 23:32:36.548739  770898 logs.go:123] Gathering logs for coredns [aa5767628f6c] ...
	I0512 23:32:36.548780  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa5767628f6c"
	I0512 23:32:36.596416  770898 logs.go:123] Gathering logs for kube-proxy [dc7bed8be1c3] ...
	I0512 23:32:36.596465  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7bed8be1c3"
	I0512 23:32:36.636761  770898 logs.go:123] Gathering logs for kubernetes-dashboard [987dc4684b4b] ...
	I0512 23:32:36.636791  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987dc4684b4b"
	I0512 23:32:36.669771  770898 logs.go:123] Gathering logs for container status ...
	I0512 23:32:36.669801  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0512 23:32:39.214092  770898 system_pods.go:59] 8 kube-system pods found
	I0512 23:32:39.214124  770898 system_pods.go:61] "coredns-64897985d-zcth8" [31142980-1191-40da-b252-be5993499640] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 23:32:39.214130  770898 system_pods.go:61] "etcd-embed-certs-20220512231813-516044" [9732c40a-dab7-458c-bed0-7bf2d845dc6d] Running
	I0512 23:32:39.214136  770898 system_pods.go:61] "kube-apiserver-embed-certs-20220512231813-516044" [0a16efda-bac5-4432-8adf-6fd5ebc0267a] Running
	I0512 23:32:39.214141  770898 system_pods.go:61] "kube-controller-manager-embed-certs-20220512231813-516044" [544284a8-77cf-480e-9860-ac60b37810bc] Running
	I0512 23:32:39.214145  770898 system_pods.go:61] "kube-proxy-thpfx" [a4570809-edf5-49ba-9973-417a32f66e0e] Running
	I0512 23:32:39.214151  770898 system_pods.go:61] "kube-scheduler-embed-certs-20220512231813-516044" [c1158183-e2cf-48cd-b6c1-13b5aa66f69b] Running
	I0512 23:32:39.214159  770898 system_pods.go:61] "metrics-server-b955d9d8-x295t" [bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 23:32:39.214171  770898 system_pods.go:61] "storage-provisioner" [043b802f-2325-4434-bf13-35dfc71b743e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 23:32:39.214186  770898 system_pods.go:74] duration metric: took 3.507880682s to wait for pod list to return data ...
	I0512 23:32:39.214205  770898 default_sa.go:34] waiting for default service account to be created ...
	I0512 23:32:39.216181  770898 default_sa.go:45] found service account: "default"
	I0512 23:32:39.216202  770898 default_sa.go:55] duration metric: took 1.990439ms for default service account to be created ...
	I0512 23:32:39.216210  770898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0512 23:32:39.221501  770898 system_pods.go:86] 8 kube-system pods found
	I0512 23:32:39.221536  770898 system_pods.go:89] "coredns-64897985d-zcth8" [31142980-1191-40da-b252-be5993499640] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 23:32:39.221547  770898 system_pods.go:89] "etcd-embed-certs-20220512231813-516044" [9732c40a-dab7-458c-bed0-7bf2d845dc6d] Running
	I0512 23:32:39.221556  770898 system_pods.go:89] "kube-apiserver-embed-certs-20220512231813-516044" [0a16efda-bac5-4432-8adf-6fd5ebc0267a] Running
	I0512 23:32:39.221571  770898 system_pods.go:89] "kube-controller-manager-embed-certs-20220512231813-516044" [544284a8-77cf-480e-9860-ac60b37810bc] Running
	I0512 23:32:39.221578  770898 system_pods.go:89] "kube-proxy-thpfx" [a4570809-edf5-49ba-9973-417a32f66e0e] Running
	I0512 23:32:39.221588  770898 system_pods.go:89] "kube-scheduler-embed-certs-20220512231813-516044" [c1158183-e2cf-48cd-b6c1-13b5aa66f69b] Running
	I0512 23:32:39.221599  770898 system_pods.go:89] "metrics-server-b955d9d8-x295t" [bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 23:32:39.221614  770898 system_pods.go:89] "storage-provisioner" [043b802f-2325-4434-bf13-35dfc71b743e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 23:32:39.221628  770898 system_pods.go:126] duration metric: took 5.412487ms to wait for k8s-apps to be running ...
	I0512 23:32:39.221640  770898 system_svc.go:44] waiting for kubelet service to be running ....
	I0512 23:32:39.221689  770898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:32:39.232305  770898 system_svc.go:56] duration metric: took 10.655146ms WaitForService to wait for kubelet.
	I0512 23:32:39.232327  770898 kubeadm.go:548] duration metric: took 4m16.83463904s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0512 23:32:39.232347  770898 node_conditions.go:102] verifying NodePressure condition ...
	I0512 23:32:39.234702  770898 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0512 23:32:39.234724  770898 node_conditions.go:123] node cpu capacity is 8
	I0512 23:32:39.234735  770898 node_conditions.go:105] duration metric: took 2.381387ms to run NodePressure ...
	I0512 23:32:39.234747  770898 start.go:213] waiting for startup goroutines ...
	I0512 23:32:39.281124  770898 start.go:504] kubectl: 1.24.0, cluster: 1.23.5 (minor skew: 1)
	I0512 23:32:39.283591  770898 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220512231813-516044" cluster and "default" namespace by default
	I0512 23:32:34.729190  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:36.729835  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:38.730504  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:37.493752  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:39.992662  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:41.229145  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:43.729326  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:42.493628  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:44.993359  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:45.731148  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:48.229060  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:47.493754  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:49.992930  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
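	Both pods polled above are stuck at Ready:"False". A minimal way to inspect them by hand (a sketch, not part of the test run), assuming kubectl points at the matching profile and the pods carry the stock k8s-app=calico-node label from the upstream Calico manifest:
	
	  kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
	  kubectl -n kube-system describe pod calico-node-wzwqd   # pod name taken from the log lines above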
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 23:22:47 UTC, end at Thu 2022-05-12 23:32:53 UTC. --
	May 12 23:28:42 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:28:42.516692287Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:28:42 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:28:42.518645398Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:28:49 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:28:49.681944833Z" level=info msg="ignoring event" container=5ae88164da8743d0ea266531809a940b25b20b5dc9d03aa10a01b9b8d4f777d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:28:56 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:28:56.798966700Z" level=info msg="ignoring event" container=4bdf19dc44ba3982e6b7d092fb2b8d517fc2b38d45b95b137ba21631783f51f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:29:06 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:06.524305136Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:29:06 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:06.524344791Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:29:06 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:06.526318363Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:29:06 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:06.668155814Z" level=info msg="ignoring event" container=62f3808e68323f31410c8305ac0f29ca91bfd0eedc24943c78c036262fd3aa44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:29:20 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:20.676875170Z" level=info msg="ignoring event" container=1ae619541c0ab1e85513fe2b1d8ebf84fa8796ff4941d302376783c3cbc8f1e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:29:27 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:27.910416439Z" level=info msg="ignoring event" container=aaf62a45a647cda77bb07cf48c3c92d08e127963a8e5fadd3eda5b68dbbc76e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:29:37 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:37.058659820Z" level=info msg="ignoring event" container=e9e0013894675a64d6afdb5230fd1851cb716cf69e5543179ca63b6c542a8275 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:30:00 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:00.529161449Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:30:00 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:00.529205312Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:30:00 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:00.530977396Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:30:04 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:04.653627006Z" level=info msg="ignoring event" container=1cd006e9a37de8029f40473dbd09b5495fd5bab08c9ca1a99097e9f6a7ba591c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:30:11 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:11.652843089Z" level=info msg="ignoring event" container=e18bc47f2141ef5ac368b630b274bb257a3d8daebf8ee7a0012ea08fa15a3a7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:30:29 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:29.659515351Z" level=info msg="ignoring event" container=aef32dfa43a76433bc4eff66809e2ff36bdc4dba88a4d62d5af8dce6a0ca901a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:31:10 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:10.626186505Z" level=info msg="ignoring event" container=acbd1356496e29ffa881090ca61b1473f7e070165a16ca6ea4dbf94872f6a570 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:31:21 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:21.645658726Z" level=info msg="ignoring event" container=287730a8ff0d89a4653613d92f3a53fa63ec7365d94c73570ca27abae825ec5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:31:25 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:25.646524763Z" level=info msg="ignoring event" container=25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:31:30 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:30.529345687Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:31:30 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:30.529382145Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:31:30 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:30.531659400Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:32:30 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:32:30.653012562Z" level=info msg="ignoring event" container=727092ac44e3a68485ee527acbd975bddcbbcf9a615f6a9fb9d1b45924db17cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:32:34 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:32:34.643944628Z" level=info msg="ignoring event" container=987dc4684b4bcd76c2376298fe9129259eab32b8a29bb7dae4fb7dd69e2f0973 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
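	The recurring "lookup fake.domain: no such host" errors above come from the metrics-server image reference (fake.domain/k8s.gcr.io/echoserver:1.4, per the kubelet log below), so every pull attempt fails the same way; as a sanity check it can be reproduced on the node with a plain pull:
	
	  docker pull fake.domain/k8s.gcr.io/echoserver:1.4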
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	987dc4684b4bc       7fff914c4a615       49 seconds ago       Exited              kubernetes-dashboard        4                   c6b7397b134d6
	727092ac44e3a       6e38f40d628db       53 seconds ago       Exited              storage-provisioner         4                   2902fe86f9b27
	25fb116460078       a90209bb39e3d       About a minute ago   Exited              dashboard-metrics-scraper   5                   3934a8affcd12
	aa5767628f6c8       a4ca41631cc7a       4 minutes ago        Running             coredns                     0                   8410ebedb8c21
	dc7bed8be1c34       3c53fa8541f95       4 minutes ago        Running             kube-proxy                  0                   9855422fb6fb4
	b2d43c18073be       884d49d6d8c9f       4 minutes ago        Running             kube-scheduler              2                   facb532e1da11
	48098a84d7fde       3fc1d62d65872       4 minutes ago        Running             kube-apiserver              2                   14cd5a47d5076
	900ff0eeacc6f       25f8c7f3da61c       4 minutes ago        Running             etcd                        2                   ee3e11268967c
	dd2291ed28a82       b0c9e5e4dbb14       4 minutes ago        Running             kube-controller-manager     2                   84fbd3ddee7b5
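	The three Exited entries above (kubernetes-dashboard, storage-provisioner, dashboard-metrics-scraper) are the crash-looping containers; their IDs can be fed straight back to docker for logs. A minimal sketch from inside the node (profile name taken from this report; not part of the test run):
	
	  minikube -p embed-certs-20220512231813-516044 ssh
	  docker ps -a --filter status=exited --format '{{.ID}} {{.Names}} {{.Status}}'
	  docker logs --tail 50 987dc4684b4b   # kubernetes-dashboard, ID from the table above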
	
	* 
	* ==> coredns [aa5767628f6c] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[... identical "plugin/ready: Still waiting on" line repeated 24 more times over the captured window ...]
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220512231813-516044
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220512231813-516044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5812f8ec06db4997111dc3269784a7f664662f05
	                    minikube.k8s.io/name=embed-certs-20220512231813-516044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_12T23_28_08_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 May 2022 23:28:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220512231813-516044
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 May 2022 23:32:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 May 2022 23:28:39 +0000   Thu, 12 May 2022 23:28:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 May 2022 23:28:39 +0000   Thu, 12 May 2022 23:28:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 May 2022 23:28:39 +0000   Thu, 12 May 2022 23:28:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 May 2022 23:28:39 +0000   Thu, 12 May 2022 23:28:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220512231813-516044
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 1729fd8b7c184ebda96a08181510f608
	  System UUID:                03c44298-fcaf-4873-a4e1-09e2c3009e1b
	  Boot ID:                    88a64cd6-2747-4e4a-a528-ec239b8b4bba
	  Kernel Version:             5.13.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.15
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-zcth8                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m32s
	  kube-system                 etcd-embed-certs-20220512231813-516044                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m45s
	  kube-system                 kube-apiserver-embed-certs-20220512231813-516044             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-embed-certs-20220512231813-516044    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-thpfx                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-embed-certs-20220512231813-516044             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-b955d9d8-x295t                                100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m28s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-qnw7q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-6z6nx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m29s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  4m53s (x4 over 4m53s)  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s (x4 over 4m53s)  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s (x4 over 4m53s)  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m45s                  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s                  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s                  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m45s                  kubelet     Starting kubelet.
	  Normal  NodeReady                4m35s                  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeReady
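	Note that the node itself is Ready with no pressure conditions, so the failures sit at the pod level: metrics-server pending on the unpullable image and both dashboard pods crash-looping. A quick follow-up, assuming kubectl is configured for this cluster:
	
	  kubectl -n kube-system describe pod metrics-server-b955d9d8-x295t      # surfaces the ImagePullBackOff events
	  kubectl -n kubernetes-dashboard get pods -o wide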
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 46 a8 35 07 25 08 06
	[  +3.309498] IPv4: martian source 10.85.0.89 from 10.85.0.89, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 7d b2 09 4a b8 08 06
	[  +2.608497] IPv4: martian source 10.85.0.90 from 10.85.0.90, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 8b e8 4e 8f 9c 08 06
	[  +3.389409] IPv4: martian source 10.85.0.91 from 10.85.0.91, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 8d 85 23 32 1b 08 06
	[  +2.823721] IPv4: martian source 10.85.0.92 from 10.85.0.92, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a fa 21 4b 5f 30 08 06
	[  +2.486921] IPv4: martian source 10.85.0.93 from 10.85.0.93, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a b7 5a d7 25 ce 08 06
	[  +2.823208] IPv4: martian source 10.85.0.94 from 10.85.0.94, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 30 c7 b4 d5 50 08 06
	[  +2.957810] IPv4: martian source 10.85.0.95 from 10.85.0.95, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a e6 b0 55 2d 23 08 06
	[  +2.955499] IPv4: martian source 10.85.0.96 from 10.85.0.96, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e d9 04 40 72 80 08 06
	[  +2.356634] IPv4: martian source 10.244.0.124 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e ff 16 ff 6f 35 08 06
	[  +0.523560] IPv4: martian source 10.85.0.97 from 10.85.0.97, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 4f 5a 78 6f dc 08 06
	[  +0.495932] IPv4: martian source 10.244.0.124 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e ff 16 ff 6f 35 08 06
	[  +1.023924] IPv4: martian source 10.244.0.124 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e ff 16 ff 6f 35 08 06
	
	* 
	* ==> etcd [900ff0eeacc6] <==
	* {"level":"info","ts":"2022-05-12T23:28:25.229Z","caller":"traceutil/trace.go:171","msg":"trace[1592263554] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:472; }","duration":"220.560715ms","start":"2022-05-12T23:28:25.009Z","end":"2022-05-12T23:28:25.229Z","steps":["trace[1592263554] 'read index received'  (duration: 100.451579ms)","trace[1592263554] 'applied index is now lower than readState.Index'  (duration: 120.107872ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T23:28:25.229Z","caller":"traceutil/trace.go:171","msg":"trace[1616192255] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"221.425168ms","start":"2022-05-12T23:28:25.008Z","end":"2022-05-12T23:28:25.229Z","steps":["trace[1616192255] 'process raft request'  (duration: 101.091479ms)","trace[1616192255] 'compare'  (duration: 119.662709ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T23:28:25.230Z","caller":"traceutil/trace.go:171","msg":"trace[1683575468] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"221.616422ms","start":"2022-05-12T23:28:25.008Z","end":"2022-05-12T23:28:25.230Z","steps":["trace[1683575468] 'process raft request'  (duration: 220.927227ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T23:28:25.230Z","caller":"traceutil/trace.go:171","msg":"trace[610901355] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"219.970754ms","start":"2022-05-12T23:28:25.010Z","end":"2022-05-12T23:28:25.230Z","steps":["trace[610901355] 'process raft request'  (duration: 219.112986ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.230Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"221.57052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/kubernetes-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T23:28:25.230Z","caller":"traceutil/trace.go:171","msg":"trace[800546737] range","detail":"{range_begin:/registry/clusterrolebindings/kubernetes-dashboard; range_end:; response_count:0; response_revision:461; }","duration":"221.605798ms","start":"2022-05-12T23:28:25.008Z","end":"2022-05-12T23:28:25.230Z","steps":["trace[800546737] 'agreement among raft nodes before linearized reading'  (duration: 221.520387ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1580646413] linearizableReadLoop","detail":"{readStateIndex:477; appliedIndex:476; }","duration":"303.065119ms","start":"2022-05-12T23:28:25.236Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1580646413] 'read index received'  (duration: 251.236621ms)","trace[1580646413] 'applied index is now lower than readState.Index'  (duration: 51.827693ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1197618302] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"302.08302ms","start":"2022-05-12T23:28:25.237Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1197618302] 'process raft request'  (duration: 301.990186ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"303.270868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.237Z","time spent":"302.161226ms","remote":"127.0.0.1:41872","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3041,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/metrics-server-b955d9d8\" mod_revision:0 > success:<request_put:<key:\"/registry/replicasets/kube-system/metrics-server-b955d9d8\" value_size:2976 >> failure:<>"}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[921791399] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:464; }","duration":"303.330606ms","start":"2022-05-12T23:28:25.236Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[921791399] 'agreement among raft nodes before linearized reading'  (duration: 303.220603ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.236Z","time spent":"303.376861ms","remote":"127.0.0.1:41838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":29,"request content":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" "}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"302.345882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20220512231813-516044\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1379106560] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"303.399636ms","start":"2022-05-12T23:28:25.236Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1379106560] 'process raft request'  (duration: 251.3211ms)","trace[1379106560] 'compare'  (duration: 51.63098ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1061428158] range","detail":"{range_begin:/registry/minions/embed-certs-20220512231813-516044; range_end:; response_count:1; response_revision:464; }","duration":"302.372231ms","start":"2022-05-12T23:28:25.237Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1061428158] 'agreement among raft nodes before linearized reading'  (duration: 302.323786ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.237Z","time spent":"302.401755ms","remote":"127.0.0.1:41782","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4590,"request content":"key:\"/registry/minions/embed-certs-20220512231813-516044\" "}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"303.615622ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"301.843757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kubernetes-dashboard/kubernetes-dashboard-settings\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"156.283983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.236Z","time spent":"303.47453ms","remote":"127.0.0.1:41786","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":220,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" mod_revision:455 > success:<request_put:<key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" value_size:158 >> failure:<request_range:<key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" > >"}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1693590207] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:464; }","duration":"156.314569ms","start":"2022-05-12T23:28:25.383Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1693590207] 'agreement among raft nodes before linearized reading'  (duration: 156.253409ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[599143980] range","detail":"{range_begin:/registry/configmaps/kubernetes-dashboard/kubernetes-dashboard-settings; range_end:; response_count:0; response_revision:464; }","duration":"301.881115ms","start":"2022-05-12T23:28:25.237Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[599143980] 'agreement among raft nodes before linearized reading'  (duration: 301.821401ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.237Z","time spent":"302.012007ms","remote":"127.0.0.1:41774","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":29,"request content":"key:\"/registry/configmaps/kubernetes-dashboard/kubernetes-dashboard-settings\" "}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1907040849] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:464; }","duration":"303.640192ms","start":"2022-05-12T23:28:25.236Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1907040849] 'agreement among raft nodes before linearized reading'  (duration: 303.600301ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.540Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.236Z","time spent":"303.869655ms","remote":"127.0.0.1:41786","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" "}
	
	* 
	* ==> kernel <==
	*  23:32:53 up  6:15,  0 users,  load average: 4.67, 5.59, 4.32
	Linux embed-certs-20220512231813-516044 5.13.0-1025-gcp #30~20.04.1-Ubuntu SMP Tue Apr 26 03:01:25 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [48098a84d7fd] <==
	* I0512 23:28:06.625727       1 controller.go:611] quota admission added evaluator for: endpoints
	I0512 23:28:06.636720       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0512 23:28:07.306937       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0512 23:28:07.889748       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0512 23:28:07.900543       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0512 23:28:07.917355       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0512 23:28:08.331923       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0512 23:28:20.722343       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0512 23:28:21.191351       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0512 23:28:24.093678       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0512 23:28:25.792834       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.236.68]
	W0512 23:28:25.914378       1 handler_proxy.go:104] no RequestInfo found in the context
	E0512 23:28:25.914455       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 23:28:25.914464       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0512 23:28:25.990545       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.109.187.12]
	I0512 23:28:26.028007       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.240.35]
	W0512 23:29:25.915318       1 handler_proxy.go:104] no RequestInfo found in the context
	E0512 23:29:25.915405       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 23:29:25.915423       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0512 23:31:25.916269       1 handler_proxy.go:104] no RequestInfo found in the context
	E0512 23:31:25.916372       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 23:31:25.916392       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [dd2291ed28a8] <==
	* I0512 23:28:25.704503       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0512 23:28:25.779229       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 23:28:25.779617       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0512 23:28:25.787394       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 23:28:25.787394       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0512 23:28:25.796236       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-qnw7q"
	I0512 23:28:25.818216       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-6z6nx"
	E0512 23:28:50.391723       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:28:50.836272       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:29:20.408782       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:29:20.853435       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:29:50.426456       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:29:50.868820       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:30:20.445543       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:30:20.890124       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:30:50.462458       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:30:50.907058       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:31:20.478855       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:31:20.925774       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:31:50.503627       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:31:50.942929       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:32:20.523487       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:32:20.957993       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:32:50.538532       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:32:50.972919       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
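	The errors above are a knock-on effect: with metrics-server never Running, its v1beta1.metrics.k8s.io APIService stays unavailable, so resource-quota and garbage-collector API discovery fail roughly every 30s. The aggregated API's status can be checked directly, e.g.:
	
	  kubectl get apiservice v1beta1.metrics.k8s.io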
	
	* 
	* ==> kube-proxy [dc7bed8be1c3] <==
	* I0512 23:28:23.579567       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0512 23:28:23.579697       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0512 23:28:23.579757       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0512 23:28:24.089347       1 server_others.go:206] "Using iptables Proxier"
	I0512 23:28:24.089384       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0512 23:28:24.089394       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0512 23:28:24.089420       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0512 23:28:24.089786       1 server.go:656] "Version info" version="v1.23.5"
	I0512 23:28:24.090624       1 config.go:226] "Starting endpoint slice config controller"
	I0512 23:28:24.090661       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0512 23:28:24.090683       1 config.go:317] "Starting service config controller"
	I0512 23:28:24.090688       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0512 23:28:24.191510       1 shared_informer.go:247] Caches are synced for service config 
	I0512 23:28:24.191529       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [b2d43c18073b] <==
	* W0512 23:28:05.290979       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 23:28:05.291117       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0512 23:28:05.291368       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0512 23:28:05.291524       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0512 23:28:05.291842       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0512 23:28:05.291983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0512 23:28:05.293109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 23:28:05.293143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0512 23:28:05.293708       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0512 23:28:05.293776       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0512 23:28:06.174717       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 23:28:06.174772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0512 23:28:06.174847       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 23:28:06.174935       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0512 23:28:06.184356       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0512 23:28:06.184390       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0512 23:28:06.190550       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0512 23:28:06.190582       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0512 23:28:06.232668       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0512 23:28:06.232701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0512 23:28:06.257839       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0512 23:28:06.257883       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0512 23:28:06.378295       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0512 23:28:06.378335       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0512 23:28:08.277321       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 23:22:47 UTC, end at Thu 2022-05-12 23:32:53 UTC. --
	May 12 23:32:14 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:14.490954    4970 scope.go:110] "RemoveContainer" containerID="25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1"
	May 12 23:32:14 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:14.491338    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-qnw7q_kubernetes-dashboard(e88272f4-f193-4eba-91a1-fa966d4b7483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-qnw7q" podUID=e88272f4-f193-4eba-91a1-fa966d4b7483
	May 12 23:32:22 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:22.492562    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-b955d9d8-x295t" podUID=bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc
	May 12 23:32:28 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:28.492239    4970 scope.go:110] "RemoveContainer" containerID="25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1"
	May 12 23:32:28 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:28.492638    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-qnw7q_kubernetes-dashboard(e88272f4-f193-4eba-91a1-fa966d4b7483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-qnw7q" podUID=e88272f4-f193-4eba-91a1-fa966d4b7483
	May 12 23:32:31 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:31.103226    4970 scope.go:110] "RemoveContainer" containerID="acbd1356496e29ffa881090ca61b1473f7e070165a16ca6ea4dbf94872f6a570"
	May 12 23:32:31 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:31.103614    4970 scope.go:110] "RemoveContainer" containerID="727092ac44e3a68485ee527acbd975bddcbbcf9a615f6a9fb9d1b45924db17cb"
	May 12 23:32:31 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:31.103873    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(043b802f-2325-4434-bf13-35dfc71b743e)\"" pod="kube-system/storage-provisioner" podUID=043b802f-2325-4434-bf13-35dfc71b743e
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:35.134874    4970 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx through plugin: invalid network status for"
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:35.139893    4970 scope.go:110] "RemoveContainer" containerID="287730a8ff0d89a4653613d92f3a53fa63ec7365d94c73570ca27abae825ec5a"
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:35.140242    4970 scope.go:110] "RemoveContainer" containerID="987dc4684b4bcd76c2376298fe9129259eab32b8a29bb7dae4fb7dd69e2f0973"
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:35.140585    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-8469778f77-6z6nx_kubernetes-dashboard(6ffbcd0f-ff86-4fbc-906e-472268aebcf5)\"" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx" podUID=6ffbcd0f-ff86-4fbc-906e-472268aebcf5
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:35.492587    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-b955d9d8-x295t" podUID=bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc
	May 12 23:32:36 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:36.147577    4970 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx through plugin: invalid network status for"
	May 12 23:32:36 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:36.150662    4970 scope.go:110] "RemoveContainer" containerID="987dc4684b4bcd76c2376298fe9129259eab32b8a29bb7dae4fb7dd69e2f0973"
	May 12 23:32:36 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:36.151012    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-8469778f77-6z6nx_kubernetes-dashboard(6ffbcd0f-ff86-4fbc-906e-472268aebcf5)\"" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx" podUID=6ffbcd0f-ff86-4fbc-906e-472268aebcf5
	May 12 23:32:40 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:40.491687    4970 scope.go:110] "RemoveContainer" containerID="25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1"
	May 12 23:32:40 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:40.492096    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-qnw7q_kubernetes-dashboard(e88272f4-f193-4eba-91a1-fa966d4b7483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-qnw7q" podUID=e88272f4-f193-4eba-91a1-fa966d4b7483
	May 12 23:32:46 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:46.492275    4970 scope.go:110] "RemoveContainer" containerID="727092ac44e3a68485ee527acbd975bddcbbcf9a615f6a9fb9d1b45924db17cb"
	May 12 23:32:46 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:46.492532    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(043b802f-2325-4434-bf13-35dfc71b743e)\"" pod="kube-system/storage-provisioner" podUID=043b802f-2325-4434-bf13-35dfc71b743e
	May 12 23:32:49 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:49.493141    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-b955d9d8-x295t" podUID=bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc
	May 12 23:32:51 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:51.491160    4970 scope.go:110] "RemoveContainer" containerID="987dc4684b4bcd76c2376298fe9129259eab32b8a29bb7dae4fb7dd69e2f0973"
	May 12 23:32:51 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:51.491308    4970 scope.go:110] "RemoveContainer" containerID="25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1"
	May 12 23:32:51 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:51.491567    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-8469778f77-6z6nx_kubernetes-dashboard(6ffbcd0f-ff86-4fbc-906e-472268aebcf5)\"" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx" podUID=6ffbcd0f-ff86-4fbc-906e-472268aebcf5
	May 12 23:32:51 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:51.491617    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-qnw7q_kubernetes-dashboard(e88272f4-f193-4eba-91a1-fa966d4b7483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-qnw7q" podUID=e88272f4-f193-4eba-91a1-fa966d4b7483
	
	* 
	* ==> kubernetes-dashboard [987dc4684b4b] <==
	* 2022/05/12 23:32:04 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0005dfaf0)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0001dcc00)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x194fa64)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x1cf
	2022/05/12 23:32:04 Using namespace: kubernetes-dashboard
	2022/05/12 23:32:04 Using in-cluster config to connect to apiserver
	2022/05/12 23:32:04 Using secret token for csrf signing
	2022/05/12 23:32:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	
	* 
	* ==> storage-provisioner [727092ac44e3] <==
	* I0512 23:32:00.633013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0512 23:32:30.636455       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
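The three failing components above fail for two distinct reasons: kubernetes-dashboard [987dc4684b4b] and storage-provisioner [727092ac44e3] both die on "dial tcp 10.96.0.1:443: i/o timeout" (the in-cluster service VIP is unreachable, which points at kube-proxy/iptables on the node rather than at the pods themselves), while metrics-server sits in ImagePullBackOff on "fake.domain/k8s.gcr.io/echoserver:1.4", an image reference that appears deliberately unresolvable in this suite. A minimal triage sketch against a live run of this profile; these commands are hypothetical additions, not part of the harness, and assume curl is available inside the node image:

PROFILE=embed-certs-20220512231813-516044

# Probe the service VIP from inside the node; a timeout here implicates
# kube-proxy/iptables rather than the dashboard or provisioner pods.
out/minikube-linux-amd64 ssh -p "$PROFILE" curl -sk --max-time 5 https://10.96.0.1:443/version

# Fetch the previous (crashed) container's log for the dashboard pod named in the kubelet log.
kubectl --context "$PROFILE" -n kubernetes-dashboard logs kubernetes-dashboard-8469778f77-6z6nx --previous

# Confirm the metrics-server failure is only the unresolvable registry, via the pod's events.
kubectl --context "$PROFILE" -n kube-system describe pod metrics-server-b955d9d8-x295t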
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220512231813-516044 -n embed-certs-20220512231813-516044
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220512231813-516044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-x295t
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220512231813-516044 describe pod metrics-server-b955d9d8-x295t
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220512231813-516044 describe pod metrics-server-b955d9d8-x295t: exit status 1 (77.933656ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-x295t" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220512231813-516044 describe pod metrics-server-b955d9d8-x295t: exit status 1
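The exit status 1 above is a race rather than a new failure: metrics-server-b955d9d8-x295t was reported as non-running at helpers_test.go:270, but had already gone away (deleted, or replaced under a new name) by the time the describe at helpers_test.go:275 ran. A sketch of a list-then-describe loop that tolerates that race; hypothetical, not part of the harness:

CTX=embed-certs-20220512231813-516044
# Re-list namespace/name pairs immediately before describing each pod, and
# keep going if a pod vanished between the two calls.
kubectl --context "$CTX" get po -A --field-selector=status.phase!=Running \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
while read -r ns name; do
  kubectl --context "$CTX" -n "$ns" describe pod "$name" || true
done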
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220512231813-516044
helpers_test.go:235: (dbg) docker inspect embed-certs-20220512231813-516044:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c",
	        "Created": "2022-05-12T23:21:49.918323005Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 771187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T23:22:47.672865442Z",
	            "FinishedAt": "2022-05-12T23:22:46.407249297Z"
	        },
	        "Image": "sha256:0c5d9f8f84652aecf60b51012e4dbd6b63610a21a4eff9bcda47c370186206c5",
	        "ResolvConfPath": "/var/lib/docker/containers/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c/hostname",
	        "HostsPath": "/var/lib/docker/containers/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c/hosts",
	        "LogPath": "/var/lib/docker/containers/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c/52dfd5b2f2abb7ed4bd3e9a159772a6400aeed0a12943d829930b5409bc2345c-json.log",
	        "Name": "/embed-certs-20220512231813-516044",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20220512231813-516044:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220512231813-516044",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65c2150b68b8e4801f989bf0b53a6d32d617c837b719163df8098f06ddf021ca-init/diff:/var/lib/docker/overlay2/ee42149b25e28b76859c4061b8e1a834d47aa37da264f16af56a871bc4d249db/diff:/var/lib/docker/overlay2/3a08ce2dbc417a00b46e55b35b8386c502b9863cda04d95f2f893823ecd7a494/diff:/var/lib/docker/overlay2/cda9560399987a3ee5f4cd2af8edc9da25932bb5258944308a15874c67cbc319/diff:/var/lib/docker/overlay2/dd36997b49a6def06e9dcfdba5f7ef14311dd1de32a9a00344c6fbd50b553096/diff:/var/lib/docker/overlay2/43d0ec81b54d9b54cded9447ec902ac225ef32b76bbf8fccb297c43987228a75/diff:/var/lib/docker/overlay2/9f402168f981cd0073442c86be65fb2d132e3a78ae03bb909ac095619edb2eb2/diff:/var/lib/docker/overlay2/28bdb0476cf6f9cd9f2a0dd3331dfd3b37522bd60b1a27bb2610dca8d8b635ea/diff:/var/lib/docker/overlay2/2a0efc3b0c7eb642b0bc0c536b3a4d17e9ac518f7aebec02e1ec05b3d428fb1f/diff:/var/lib/docker/overlay2/e0c81de4167668d18ebd9e7c18a58cc9f299fd35fb62a015b25d5a02ae58d4b5/diff:/var/lib/docker/overlay2/2a4672
624588450729b53e00431ae5907364bce3567f80f2ea728fb857326904/diff:/var/lib/docker/overlay2/0e97bfc89e61d3d62858040c785f3b284f33ac7018f4b4d33a3036c098c97e3e/diff:/var/lib/docker/overlay2/8a73a22b019c3a55efb1a43c8f75fc58d5ca41ce0e49a611f547d879b1ffda7b/diff:/var/lib/docker/overlay2/848fea1622c1b0d14632242da931bc2db1161dd5b385949342c2a2c11f51cf73/diff:/var/lib/docker/overlay2/662426b8cb54c68fc690e53b79ffdaf74b3933d049ac45ac519fe0ab9768c00f/diff:/var/lib/docker/overlay2/f6dff72be55abd7c1636a8499b17e3e9c2505335e260f6441887d32e06af996c/diff:/var/lib/docker/overlay2/1457b483d3d2b3d49d94df784f17c826976abf770d40da25d61dc4a56352f801/diff:/var/lib/docker/overlay2/80ca98bba440d041f7780aece93b713f26c9681123a38f3c217bdf2994333169/diff:/var/lib/docker/overlay2/a84cd323e14e9fe88691d66a20cc13256253fd5e9438e1a5022e264217fbc7fc/diff:/var/lib/docker/overlay2/d5d7afe5ecbe4e28e78af49b1a44fcfa61023292e194719f37a0b4ed8ca82d4d/diff:/var/lib/docker/overlay2/d1c6af58176488a61b42dbade1d4c12c7320e6076dbfb9fc854fc26d0f436679/diff:/var/lib/d
ocker/overlay2/8169f5daa2d7dd4fdcbbedcd091248fb688a46d06339f1aa324c98e3df6b5d26/diff:/var/lib/docker/overlay2/0c367bf0a6d0e5d2f91a251190951488497a3b393f33ab37c9f0babfe8c3d27c/diff:/var/lib/docker/overlay2/168a4f8c2f13b8906726264edcebcb3cbe39ed638fe32e9a7e86d757de805dfc/diff:/var/lib/docker/overlay2/02b5ef49e3dece0178b813849e23e13ac56cb2c7b86db153d97fb48938a08a9b/diff:/var/lib/docker/overlay2/c3f3206ec18f364a03b895874e2e4b5e5d41b88af889d7ab1075d05d3c1174d3/diff:/var/lib/docker/overlay2/a7d920f53ed56631d472da0b34690dc70ce9c229f4feb17079d824ed2ee615c1/diff:/var/lib/docker/overlay2/9c483ae36d1f9158f5d2295d700e238d3bf16a8e46b9ea56f075485f82c5e434/diff:/var/lib/docker/overlay2/fd0dffd16fb9881ef59350db54d0cb879e79865f92e3964915828a627495351c/diff:/var/lib/docker/overlay2/cbb9eb97bc9666f97a39e769ab1e2bc70b73aeae79d2ec92822441566e6a587a/diff:/var/lib/docker/overlay2/b2639cfc76a8b294bbc4e8ca1befbee89fb96f093a1a3612207575f286a83080/diff:/var/lib/docker/overlay2/7bcf83888007057f9e2957456442eb6bde9c8112b06bd957a538463482b
7efd9/diff:/var/lib/docker/overlay2/f983b625edec8c1a25d7703ed497666a8f3dafe6ff1ffcbd55c9dd22c6c4d21d/diff:/var/lib/docker/overlay2/6e81a73b1d45906ebc7f357663b451e1ad8e61dd2a40f7da53742dec9ea8cc56/diff:/var/lib/docker/overlay2/19b513eec8f0deed93713262477ab308f8919e10b6ea5b3a4dcc41bf1cff0825/diff:/var/lib/docker/overlay2/b9af518889b8c70b0e652ee87e07c15b2e4865af121883ed942f1170763560c4/diff:/var/lib/docker/overlay2/90a4f31f04635f43475897f90e5692b3ae5ee023a53e99fdbbf382d545dac17d/diff:/var/lib/docker/overlay2/834445e7db36584c983dc950c96c9da9e0404ca274925ad142d9c7ae3ce7661d/diff:/var/lib/docker/overlay2/19337e43fcad0841394f1284cbb0d8a67e541c2bfe557a1956357cdd76508daf/diff:/var/lib/docker/overlay2/2e54094fc1a751bb1ef3c5b293d1f9e345afa75cab14bf08ae7aa007409381c8/diff:/var/lib/docker/overlay2/709d91d3b444b7fe7ab0a34a6869392318c613c4f294ddfe0c7480222c7cb35a/diff:/var/lib/docker/overlay2/2aa3de43882a67af6abdf2af131a29c63efe1b2b4f07ec65150d80ad6a6d6574/diff:/var/lib/docker/overlay2/e6cee571b331f309878811c7521d4feb397411
90ac269e42c32e6afe955e94a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65c2150b68b8e4801f989bf0b53a6d32d617c837b719163df8098f06ddf021ca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65c2150b68b8e4801f989bf0b53a6d32d617c837b719163df8098f06ddf021ca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65c2150b68b8e4801f989bf0b53a6d32d617c837b719163df8098f06ddf021ca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220512231813-516044",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220512231813-516044/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220512231813-516044",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220512231813-516044",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220512231813-516044",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f26f67b6761c19fdb5f2988b1b80b00320455fb41d6ef17e1fa6248181215ce4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f26f67b6761c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220512231813-516044": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "52dfd5b2f2ab",
	                        "embed-certs-20220512231813-516044"
	                    ],
	                    "NetworkID": "37f634322f538caf1039168a71743e1e12e3e151d83e9005d38357338f530821",
	                    "EndpointID": "991ab814fb4310fc0760f375e35794da03dab0fa496839c8e392c2c121b66cad",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
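Most of the inspect document above is noise for this post-mortem; the fields that matter are the container state, the published host ports, and the container IP. A hedged sketch of pulling just those with docker inspect Go templates (hypothetical commands, not run by the harness):

NAME=embed-certs-20220512231813-516044

# Container state in one line (matches "Status": "running", "Pid": 771187 above).
docker inspect -f '{{.State.Status}} restarts={{.RestartCount}} pid={{.State.Pid}}' "$NAME"

# Host port bindings (8443/tcp -> 49404 is the apiserver port minikube status checks).
docker inspect -f '{{range $p, $b := .NetworkSettings.Ports}}{{$p}} -> {{(index $b 0).HostPort}}{{"\n"}}{{end}}' "$NAME"

# Container IP on the per-profile network (192.168.67.2 above).
docker inspect -f '{{(index .NetworkSettings.Networks "embed-certs-20220512231813-516044").IPAddress}}' "$NAME"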
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220512231813-516044 -n embed-certs-20220512231813-516044
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220512231813-516044 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20220512231813-516044 logs -n 25: (1.28723807s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                 Profile                  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-20220512231715-516044                              | auto-20220512231715-516044               | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | pgrep -a kubelet                                           |                                          |         |         |                     |                     |
	| start   | -p newest-cni-20220512232515-516044 --memory=2200          | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                          |         |         |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                          |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker                |                                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                          |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                          |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                          |         |         |                     |                     |
	| delete  | -p auto-20220512231715-516044                              | auto-20220512231715-516044               | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	| delete  | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220512232515-516044         | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | newest-cni-20220512232515-516044                           |                                          |         |         |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:20 UTC | 12 May 22 23:26 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |                                          |         |         |                     |                     |
	|         |  --container-runtime=docker                                |                                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                          |         |         |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:26 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                          |         |         |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                          |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220512231738-516044    | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	|         | old-k8s-version-20220512231738-516044                      |                                          |         |         |                     |                     |
	| start   | -p false-20220512231715-516044                             | false-20220512231715-516044              | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:27 UTC |
	|         | --memory=2048                                              |                                          |         |         |                     |                     |
	|         | --alsologtostderr                                          |                                          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                          |         |         |                     |                     |
	|         | --cni=false --driver=docker                                |                                          |         |         |                     |                     |
	|         | --container-runtime=docker                                 |                                          |         |         |                     |                     |
	| ssh     | -p false-20220512231715-516044                             | false-20220512231715-516044              | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	|         | pgrep -a kubelet                                           |                                          |         |         |                     |                     |
	| delete  | -p false-20220512231715-516044                             | false-20220512231715-516044              | jenkins | v1.25.2 | 12 May 22 23:27 UTC | 12 May 22 23:27 UTC |
	| start   | -p                                                         | cilium-20220512231715-516044             | jenkins | v1.25.2 | 12 May 22 23:26 UTC | 12 May 22 23:27 UTC |
	|         | cilium-20220512231715-516044                               |                                          |         |         |                     |                     |
	|         | --memory=2048                                              |                                          |         |         |                     |                     |
	|         | --alsologtostderr                                          |                                          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                          |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                               |                                          |         |         |                     |                     |
	|         | --container-runtime=docker                                 |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | cilium-20220512231715-516044             | jenkins | v1.25.2 | 12 May 22 23:28 UTC | 12 May 22 23:28 UTC |
	|         | cilium-20220512231715-516044                               |                                          |         |         |                     |                     |
	|         | pgrep -a kubelet                                           |                                          |         |         |                     |                     |
	| delete  | -p                                                         | cilium-20220512231715-516044             | jenkins | v1.25.2 | 12 May 22 23:28 UTC | 12 May 22 23:28 UTC |
	|         | cilium-20220512231715-516044                               |                                          |         |         |                     |                     |
	| start   | -p                                                         | enable-default-cni-20220512231715-516044 | jenkins | v1.25.2 | 12 May 22 23:28 UTC | 12 May 22 23:29 UTC |
	|         | enable-default-cni-20220512231715-516044                   |                                          |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                            |                                          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                          |         |         |                     |                     |
	|         | --enable-default-cni=true                                  |                                          |         |         |                     |                     |
	|         | --driver=docker                                            |                                          |         |         |                     |                     |
	|         | --container-runtime=docker                                 |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | enable-default-cni-20220512231715-516044 | jenkins | v1.25.2 | 12 May 22 23:29 UTC | 12 May 22 23:29 UTC |
	|         | enable-default-cni-20220512231715-516044                   |                                          |         |         |                     |                     |
	|         | pgrep -a kubelet                                           |                                          |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220512231813-516044        | jenkins | v1.25.2 | 12 May 22 23:22 UTC | 12 May 22 23:32 UTC |
	|         | embed-certs-20220512231813-516044                          |                                          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                          |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                          |         |         |                     |                     |
	|         | --driver=docker                                            |                                          |         |         |                     |                     |
	|         | --container-runtime=docker                                 |                                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                               |                                          |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220512231813-516044        | jenkins | v1.25.2 | 12 May 22 23:32 UTC | 12 May 22 23:32 UTC |
	|         | embed-certs-20220512231813-516044                          |                                          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                          |         |         |                     |                     |
	| logs    | embed-certs-20220512231813-516044                          | embed-certs-20220512231813-516044        | jenkins | v1.25.2 | 12 May 22 23:32 UTC | 12 May 22 23:32 UTC |
	|         | logs -n 25                                                 |                                          |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 23:28:18
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 23:28:18.580360  843786 out.go:296] Setting OutFile to fd 1 ...
	I0512 23:28:18.580526  843786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:28:18.580545  843786 out.go:309] Setting ErrFile to fd 2...
	I0512 23:28:18.580552  843786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:28:18.580723  843786 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 23:28:18.581663  843786 out.go:303] Setting JSON to false
	I0512 23:28:18.584271  843786 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":22255,"bootTime":1652375844,"procs":1224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 23:28:18.584355  843786 start.go:125] virtualization: kvm guest
	I0512 23:28:18.586886  843786 out.go:177] * [enable-default-cni-20220512231715-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0512 23:28:18.588703  843786 out.go:177]   - MINIKUBE_LOCATION=12739
	I0512 23:28:18.588680  843786 notify.go:193] Checking for updates...
	I0512 23:28:18.591585  843786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 23:28:18.592981  843786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:28:18.594262  843786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 23:28:18.595557  843786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0512 23:28:18.597324  843786 config.go:178] Loaded profile config "calico-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:18.597455  843786 config.go:178] Loaded profile config "custom-weave-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:18.597573  843786 config.go:178] Loaded profile config "embed-certs-20220512231813-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:18.597648  843786 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 23:28:18.649839  843786 docker.go:137] docker version: linux-20.10.16
	I0512 23:28:18.649941  843786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:28:18.814320  843786 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:28:18.702239405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:28:18.814428  843786 docker.go:254] overlay module found
	I0512 23:28:18.816727  843786 out.go:177] * Using the docker driver based on user configuration
	I0512 23:28:18.818128  843786 start.go:284] selected driver: docker
	I0512 23:28:18.818157  843786 start.go:806] validating driver "docker" against <nil>
	I0512 23:28:18.818183  843786 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 23:28:18.819403  843786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:28:18.967412  843786 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:28:18.859994416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:28:18.967562  843786 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	E0512 23:28:18.967741  843786 start_flags.go:444] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0512 23:28:18.967763  843786 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 23:28:18.969521  843786 out.go:177] * Using Docker driver with the root privilege
	I0512 23:28:18.970712  843786 cni.go:95] Creating CNI manager for "bridge"
	I0512 23:28:18.970731  843786 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0512 23:28:18.970742  843786 start_flags.go:306] config:
	{Name:enable-default-cni-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:enable-default-cni-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:28:18.972192  843786 out.go:177] * Starting control plane node enable-default-cni-20220512231715-516044 in cluster enable-default-cni-20220512231715-516044
	I0512 23:28:18.973764  843786 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 23:28:18.975281  843786 out.go:177] * Pulling base image ...
	I0512 23:28:18.976708  843786 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:28:18.976761  843786 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 23:28:18.976776  843786 cache.go:57] Caching tarball of preloaded images
	I0512 23:28:18.976797  843786 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0512 23:28:18.977001  843786 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 23:28:18.977025  843786 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 23:28:18.977191  843786 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/config.json ...
	I0512 23:28:18.977229  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/config.json: {Name:mk3aae760a0be104b53421a4cae9bbbe3e51b18d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:19.034575  843786 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
	I0512 23:28:19.034616  843786 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in daemon, skipping load
	I0512 23:28:19.034635  843786 cache.go:206] Successfully downloaded all kic artifacts
	I0512 23:28:19.034684  843786 start.go:352] acquiring machines lock for enable-default-cni-20220512231715-516044: {Name:mk38f3d3df3d3ca8fbfac4fc046f4ebec5ba2ed4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 23:28:19.034818  843786 start.go:356] acquired machines lock for "enable-default-cni-20220512231715-516044" in 107.959µs
	I0512 23:28:19.034854  843786 start.go:91] Provisioning new machine with config: &{Name:enable-default-cni-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:enable-default-cni-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:28:19.034987  843786 start.go:131] createHost starting for "" (driver="docker")
	I0512 23:28:18.118255  826131 addons.go:417] enableAddons completed in 801.104581ms
	I0512 23:28:19.527104  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:17.469878  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:19.988219  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:17.397660  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:17.898316  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:18.397901  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:18.897673  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:19.397716  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:19.898453  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:20.398381  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:20.898327  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:21.398104  770898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:21.712059  770898 kubeadm.go:1020] duration metric: took 13.642388064s to wait for elevateKubeSystemPrivileges.
	I0512 23:28:21.712101  770898 kubeadm.go:393] StartCluster complete in 5m28.329455174s
	I0512 23:28:21.712124  770898 settings.go:142] acquiring lock: {Name:mkfe717360cf8b2fa45465ab4bd68ece68561c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:21.712255  770898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:28:21.713784  770898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig: {Name:mk0f3828db53b6683822ca2fe8148b87d561cdb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:22.397601  770898 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220512231813-516044" rescaled to 1
	I0512 23:28:22.397663  770898 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:28:22.400985  770898 out.go:177] * Verifying Kubernetes components...
	I0512 23:28:22.397715  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 23:28:22.397713  770898 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0512 23:28:22.397949  770898 config.go:178] Loaded profile config "embed-certs-20220512231813-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:22.402684  770898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:28:22.402725  770898 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220512231813-516044"
	I0512 23:28:22.402749  770898 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220512231813-516044"
	I0512 23:28:22.402749  770898 addons.go:65] Setting dashboard=true in profile "embed-certs-20220512231813-516044"
	I0512 23:28:22.402757  770898 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220512231813-516044"
	I0512 23:28:22.402771  770898 addons.go:153] Setting addon dashboard=true in "embed-certs-20220512231813-516044"
	I0512 23:28:22.402770  770898 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220512231813-516044"
	W0512 23:28:22.402773  770898 addons.go:165] addon storage-provisioner should already be in state true
	W0512 23:28:22.402779  770898 addons.go:165] addon dashboard should already be in state true
	W0512 23:28:22.402780  770898 addons.go:165] addon metrics-server should already be in state true
	I0512 23:28:22.402822  770898 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:28:22.402822  770898 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:28:22.402822  770898 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:28:22.402735  770898 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220512231813-516044"
	I0512 23:28:22.403182  770898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220512231813-516044"
	I0512 23:28:22.403336  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.403337  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.403423  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.403453  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.472938  770898 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220512231813-516044"
	W0512 23:28:22.472966  770898 addons.go:165] addon default-storageclass should already be in state true
	I0512 23:28:22.472998  770898 host.go:66] Checking if "embed-certs-20220512231813-516044" exists ...
	I0512 23:28:22.473631  770898 cli_runner.go:164] Run: docker container inspect embed-certs-20220512231813-516044 --format={{.State.Status}}
	I0512 23:28:22.478472  770898 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0512 23:28:22.485295  770898 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0512 23:28:22.487807  770898 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0512 23:28:22.487831  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0512 23:28:22.487900  770898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512231813-516044
	I0512 23:28:22.486659  770898 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0512 23:28:22.491978  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0512 23:28:22.492004  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0512 23:28:22.492061  770898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512231813-516044
	I0512 23:28:22.569275  770898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 23:28:19.037770  843786 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 23:28:19.038064  843786 start.go:165] libmachine.API.Create for "enable-default-cni-20220512231715-516044" (driver="docker")
	I0512 23:28:19.038114  843786 client.go:168] LocalClient.Create starting
	I0512 23:28:19.038208  843786 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem
	I0512 23:28:19.038252  843786 main.go:134] libmachine: Decoding PEM data...
	I0512 23:28:19.038277  843786 main.go:134] libmachine: Parsing certificate...
	I0512 23:28:19.038348  843786 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem
	I0512 23:28:19.038379  843786 main.go:134] libmachine: Decoding PEM data...
	I0512 23:28:19.038399  843786 main.go:134] libmachine: Parsing certificate...
	I0512 23:28:19.038856  843786 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 23:28:19.071828  843786 cli_runner.go:211] docker network inspect enable-default-cni-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 23:28:19.071906  843786 network_create.go:272] running [docker network inspect enable-default-cni-20220512231715-516044] to gather additional debugging logs...
	I0512 23:28:19.071929  843786 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220512231715-516044
	W0512 23:28:19.108247  843786 cli_runner.go:211] docker network inspect enable-default-cni-20220512231715-516044 returned with exit code 1
	I0512 23:28:19.108294  843786 network_create.go:275] error running [docker network inspect enable-default-cni-20220512231715-516044]: docker network inspect enable-default-cni-20220512231715-516044: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220512231715-516044
	I0512 23:28:19.108313  843786 network_create.go:277] output of [docker network inspect enable-default-cni-20220512231715-516044]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220512231715-516044
	
	** /stderr **
	I0512 23:28:19.108370  843786 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 23:28:19.149543  843786 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-43829243746f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:13:6e:f4:7c}}
	I0512 23:28:19.150200  843786 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006c4618] misses:0}
	I0512 23:28:19.150235  843786 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 23:28:19.150251  843786 network_create.go:115] attempt to create docker network enable-default-cni-20220512231715-516044 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 23:28:19.150296  843786 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220512231715-516044
	I0512 23:28:19.231144  843786 network_create.go:99] docker network enable-default-cni-20220512231715-516044 192.168.58.0/24 created
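	# Note (editor's reproduction sketch, not captured output): the network-create
	# step above can be replayed by hand with the same invocation minikube logged,
	# e.g. when debugging subnet collisions; the profile name, subnet, and flags
	# below are copied from this run's log lines:
	docker network create --driver=bridge --subnet=192.168.58.0/24 \
	  --gateway=192.168.58.1 -o --ip-masq -o --icc \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  enable-default-cni-20220512231715-516044
	docker network inspect enable-default-cni-20220512231715-516044   # confirm subnet/gateway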
	I0512 23:28:19.231187  843786 kic.go:106] calculated static IP "192.168.58.2" for the "enable-default-cni-20220512231715-516044" container
	I0512 23:28:19.231253  843786 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 23:28:19.267949  843786 cli_runner.go:164] Run: docker volume create enable-default-cni-20220512231715-516044 --label name.minikube.sigs.k8s.io=enable-default-cni-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true
	I0512 23:28:19.306316  843786 oci.go:103] Successfully created a docker volume enable-default-cni-20220512231715-516044
	I0512 23:28:19.306410  843786 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-20220512231715-516044-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220512231715-516044 --entrypoint /usr/bin/test -v enable-default-cni-20220512231715-516044:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -d /var/lib
	I0512 23:28:20.114288  843786 oci.go:107] Successfully prepared a docker volume enable-default-cni-20220512231715-516044
	I0512 23:28:20.114358  843786 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:28:20.114384  843786 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 23:28:20.114451  843786 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir
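	# Note (editor's sketch, not captured output): the docker run above untars the
	# lz4 image preload into the profile's volume. A standalone equivalent, assuming
	# the default ~/.minikube cache path instead of this job's Jenkins-specific
	# MINIKUBE_HOME:
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro" \
	  -v enable-default-cni-20220512231715-516044:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c \
	  -I lz4 -xf /preloaded.tar -C /extractDir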
	I0512 23:28:22.027467  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:24.525738  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:22.470707  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:24.968786  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:22.541272  770898 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:22.555910  770898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/embed-certs-20220512231813-516044/id_rsa Username:docker}
	I0512 23:28:22.563731  770898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/embed-certs-20220512231813-516044/id_rsa Username:docker}
	I0512 23:28:22.580243  770898 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:22.580274  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 23:28:22.580340  770898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512231813-516044
	I0512 23:28:22.580380  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 23:28:22.580442  770898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512231813-516044
	I0512 23:28:22.610308  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 23:28:22.610544  770898 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220512231813-516044" to be "Ready" ...
	I0512 23:28:22.621639  770898 node_ready.go:49] node "embed-certs-20220512231813-516044" has status "Ready":"True"
	I0512 23:28:22.621663  770898 node_ready.go:38] duration metric: took 11.097061ms waiting for node "embed-certs-20220512231813-516044" to be "Ready" ...
	I0512 23:28:22.621676  770898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:28:22.629255  770898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-4bhs4" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:22.651122  770898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/embed-certs-20220512231813-516044/id_rsa Username:docker}
	I0512 23:28:22.667637  770898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/embed-certs-20220512231813-516044/id_rsa Username:docker}
	I0512 23:28:22.724293  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0512 23:28:22.724332  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0512 23:28:22.809892  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0512 23:28:22.809933  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0512 23:28:22.879147  770898 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0512 23:28:22.879178  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0512 23:28:22.886158  770898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:23.017996  770898 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0512 23:28:23.018026  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0512 23:28:23.018821  770898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:23.019237  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0512 23:28:23.019254  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0512 23:28:23.073879  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0512 23:28:23.073967  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0512 23:28:23.088176  770898 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 23:28:23.088209  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0512 23:28:23.175612  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0512 23:28:23.175706  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0512 23:28:23.184329  770898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 23:28:23.279226  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0512 23:28:23.279263  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0512 23:28:23.574749  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0512 23:28:23.574798  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0512 23:28:23.701198  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0512 23:28:23.701229  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0512 23:28:23.738065  770898 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0512 23:28:23.738097  770898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0512 23:28:23.782715  770898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
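	# Note (editor's sketch, not captured output): the ten dashboard manifests are
	# applied in a single run of the in-node kubectl binary. The same form works for
	# ad-hoc checks from inside the node; kubernetes-dashboard is assumed here to be
	# the namespace created by dashboard-ns.yaml:
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.23.5/kubectl get pods -n kubernetes-dashboard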
	I0512 23:28:24.794225  770898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.183871971s)
	I0512 23:28:24.794260  770898 start.go:815] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0512 23:28:24.796661  770898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.910458295s)
	I0512 23:28:24.854445  770898 pod_ready.go:102] pod "coredns-64897985d-4bhs4" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:25.676617  770898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.657751735s)
	I0512 23:28:25.810431  770898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.626005558s)
	I0512 23:28:25.810589  770898 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220512231813-516044"
	I0512 23:28:26.036418  770898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.2535963s)
	I0512 23:28:26.039945  770898 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0512 23:28:26.042306  770898 addons.go:417] enableAddons completed in 3.644590537s
	I0512 23:28:25.770407  843786 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir: (5.655852092s)
	I0512 23:28:25.770451  843786 kic.go:188] duration metric: took 5.656061 seconds to extract preloaded images to volume
	W0512 23:28:25.770606  843786 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0512 23:28:25.770730  843786 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 23:28:25.931643  843786 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220512231715-516044 --name enable-default-cni-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220512231715-516044 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220512231715-516044 --network enable-default-cni-20220512231715-516044 --ip 192.168.58.2 --volume enable-default-cni-20220512231715-516044:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c
	I0512 23:28:26.725335  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Running}}
	I0512 23:28:26.773175  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:26.827246  843786 cli_runner.go:164] Run: docker exec enable-default-cni-20220512231715-516044 stat /var/lib/dpkg/alternatives/iptables
	I0512 23:28:26.925200  843786 oci.go:144] the created container "enable-default-cni-20220512231715-516044" has a running status.
	I0512 23:28:26.925231  843786 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa...
	I0512 23:28:27.249017  843786 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 23:28:27.341205  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:27.380312  843786 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 23:28:27.380338  843786 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220512231715-516044 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 23:28:27.474148  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:27.523779  843786 machine.go:88] provisioning docker machine ...
	I0512 23:28:27.523822  843786 ubuntu.go:169] provisioning hostname "enable-default-cni-20220512231715-516044"
	I0512 23:28:27.523880  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
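	# Note (editor's sketch, not captured output): every SSH hop in this log resolves
	# the container's randomly bound host port with the query below (copied from the
	# Run line above); port 49447 seen in the following lines is its output for this run:
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  enable-default-cni-20220512231715-516044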
	I0512 23:28:27.579593  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:27.579834  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:27.579863  843786 main.go:134] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-20220512231715-516044 && echo "enable-default-cni-20220512231715-516044" | sudo tee /etc/hostname
	I0512 23:28:27.738608  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: enable-default-cni-20220512231715-516044
	
	I0512 23:28:27.738698  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:27.780020  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:27.780264  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:27.780308  843786 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-20220512231715-516044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-20220512231715-516044/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-20220512231715-516044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 23:28:27.930029  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: 
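	# Note (editor's sketch, not captured output): the /etc/hosts script above is
	# idempotent: the outer grep -xq skips the edit when a line already ends in the
	# new hostname; otherwise the existing 127.0.1.1 entry is rewritten in place, or
	# a fresh one is appended. Quick in-node check:
	grep enable-default-cni-20220512231715-516044 /etc/hosts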
	I0512 23:28:27.930070  843786 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube}
	I0512 23:28:27.930132  843786 ubuntu.go:177] setting up certificates
	I0512 23:28:27.930150  843786 provision.go:83] configureAuth start
	I0512 23:28:27.930215  843786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220512231715-516044
	I0512 23:28:27.964762  843786 provision.go:138] copyHostCerts
	I0512 23:28:27.964849  843786 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem, removing ...
	I0512 23:28:27.964866  843786 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem
	I0512 23:28:27.964939  843786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem (1675 bytes)
	I0512 23:28:27.965045  843786 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem, removing ...
	I0512 23:28:27.965061  843786 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem
	I0512 23:28:27.965121  843786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem (1078 bytes)
	I0512 23:28:27.965191  843786 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem, removing ...
	I0512 23:28:27.965201  843786 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem
	I0512 23:28:27.965226  843786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem (1123 bytes)
	I0512 23:28:27.965278  843786 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-20220512231715-516044 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-20220512231715-516044]
	I0512 23:28:28.090353  843786 provision.go:172] copyRemoteCerts
	I0512 23:28:28.090425  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 23:28:28.090480  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:28.134647  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:28.239924  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 23:28:28.259486  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0512 23:28:28.280041  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 23:28:28.305083  843786 provision.go:86] duration metric: configureAuth took 374.912037ms
	I0512 23:28:28.305160  843786 ubuntu.go:193] setting minikube options for container-runtime
	I0512 23:28:28.305382  843786 config.go:178] Loaded profile config "enable-default-cni-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:28.305464  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:28.355363  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:28.355558  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:28.355587  843786 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 23:28:28.492540  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 23:28:28.492574  843786 ubuntu.go:71] root file system type: overlay
	I0512 23:28:28.492866  843786 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 23:28:28.492949  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:28.537790  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:28.537982  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:28.538085  843786 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 23:28:26.529202  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:29.027012  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:26.969151  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:28.978404  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:28.696741  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 23:28:28.696845  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:28.742175  843786 main.go:134] libmachine: Using SSH client type: native
	I0512 23:28:28.742364  843786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0512 23:28:28.742395  843786 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 23:28:29.606843  843786 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 23:28:28.691543147 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 23:28:29.606891  843786 machine.go:91] provisioned docker machine in 2.083082921s
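	# Note (editor's sketch, not captured output): the unit update above is guarded
	# by `sudo diff -u old new || { mv ...; daemon-reload; enable; restart; }`:
	# diff -u exits non-zero only when docker.service.new differs from the installed
	# unit, so the mv/reload/restart sequence runs once per content change. The
	# unified diff printed above is that diff; it can be re-checked without side
	# effects with:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new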
	I0512 23:28:29.606904  843786 client.go:171] LocalClient.Create took 10.568776373s
	I0512 23:28:29.606916  843786 start.go:173] duration metric: libmachine.API.Create for "enable-default-cni-20220512231715-516044" took 10.568855897s
	I0512 23:28:29.606939  843786 start.go:306] post-start starting for "enable-default-cni-20220512231715-516044" (driver="docker")
	I0512 23:28:29.606947  843786 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 23:28:29.607018  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 23:28:29.607072  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:29.652464  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:29.751136  843786 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 23:28:29.753905  843786 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 23:28:29.753929  843786 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 23:28:29.753938  843786 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 23:28:29.753943  843786 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 23:28:29.753953  843786 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/addons for local assets ...
	I0512 23:28:29.754001  843786 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files for local assets ...
	I0512 23:28:29.754083  843786 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem -> 5160442.pem in /etc/ssl/certs
	I0512 23:28:29.754167  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 23:28:29.761146  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /etc/ssl/certs/5160442.pem (1708 bytes)
	I0512 23:28:29.782276  843786 start.go:309] post-start completed in 175.321598ms
	I0512 23:28:29.782686  843786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220512231715-516044
	I0512 23:28:29.829908  843786 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/config.json ...
	I0512 23:28:29.830487  843786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 23:28:29.830552  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:29.875938  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:29.969741  843786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 23:28:29.975669  843786 start.go:134] duration metric: createHost completed in 10.94066329s
	I0512 23:28:29.975697  843786 start.go:81] releasing machines lock for "enable-default-cni-20220512231715-516044", held for 10.940860777s
	I0512 23:28:29.975797  843786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220512231715-516044
	I0512 23:28:30.027088  843786 ssh_runner.go:195] Run: systemctl --version
	I0512 23:28:30.027135  843786 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 23:28:30.027178  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:30.027200  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:30.070104  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:30.072460  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:30.193691  843786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 23:28:30.205546  843786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:28:30.219357  843786 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 23:28:30.219426  843786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 23:28:30.232948  843786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 23:28:30.250418  843786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 23:28:30.368887  843786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 23:28:30.475719  843786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:28:30.486865  843786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 23:28:30.572570  843786 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 23:28:30.583832  843786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:28:30.626733  843786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:28:27.143702  770898 pod_ready.go:102] pod "coredns-64897985d-4bhs4" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:27.643200  770898 pod_ready.go:92] pod "coredns-64897985d-4bhs4" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:27.643235  770898 pod_ready.go:81] duration metric: took 5.013909167s waiting for pod "coredns-64897985d-4bhs4" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:27.643248  770898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-zcth8" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:29.655315  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:30.674157  843786 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 23:28:30.674357  843786 cli_runner.go:164] Run: docker network inspect enable-default-cni-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
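
	Rendered, that Go template emits the docker network as a single JSON object. With the gateway and node address seen later in this log, and illustrative (not logged) driver and MTU values, the output would look roughly like the following; the trailing comma inside ContainerIPs is literal template output:

	    {"Name": "enable-default-cni-20220512231715-516044","Driver": "bridge","Subnet": "192.168.58.0/24","Gateway": "192.168.58.1","MTU": 1500, "ContainerIPs": ["192.168.58.2/24",]}
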
	I0512 23:28:30.722986  843786 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0512 23:28:30.727525  843786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
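
	The bash one-liner above is an idempotent /etc/hosts update: it strips any stale host.minikube.internal line, appends the current mapping, and copies the temp file back (cp rather than mv, since /etc/hosts is bind-mounted inside the container and cannot be renamed over). The net effect is a single entry:

	    192.168.58.1	host.minikube.internal

	The same pattern repeats below for control-plane.minikube.internal.
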
	I0512 23:28:30.740190  843786 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:28:30.740283  843786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:28:30.779012  843786 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:28:30.779041  843786 docker.go:541] Images already preloaded, skipping extraction
	I0512 23:28:30.779102  843786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:28:30.821535  843786 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:28:30.821565  843786 cache_images.go:84] Images are preloaded, skipping loading
	I0512 23:28:30.821631  843786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 23:28:30.925253  843786 cni.go:95] Creating CNI manager for "bridge"
	I0512 23:28:30.925290  843786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 23:28:30.925308  843786 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-20220512231715-516044 NodeName:enable-default-cni-20220512231715-516044 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 23:28:30.925534  843786 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "enable-default-cni-20220512231715-516044"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0512 23:28:30.925652  843786 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=enable-default-cni-20220512231715-516044 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:enable-default-cni-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
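
	The unit text above is written as a systemd drop-in (the 387-byte 10-kubeadm.conf scp'd a few lines below). The empty ExecStart= followed by a full ExecStart=... is the standard drop-in idiom: the blank assignment clears the command list inherited from the base kubelet.service so the override fully replaces it rather than appending a second command. Generic shape (the flag is illustrative, not from this log):

	    [Service]
	    # the empty assignment clears ExecStart from the base unit
	    ExecStart=
	    ExecStart=/usr/bin/kubelet --hypothetical-flag=value
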
	I0512 23:28:30.925728  843786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 23:28:30.935483  843786 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 23:28:30.935551  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 23:28:30.945502  843786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0512 23:28:30.959341  843786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 23:28:30.972297  843786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2062 bytes)
	I0512 23:28:30.991361  843786 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0512 23:28:30.995409  843786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 23:28:31.009652  843786 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044 for IP: 192.168.58.2
	I0512 23:28:31.009772  843786 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key
	I0512 23:28:31.009822  843786 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key
	I0512 23:28:31.009889  843786 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.key
	I0512 23:28:31.009913  843786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt with IP's: []
	I0512 23:28:31.463677  843786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt ...
	I0512 23:28:31.463711  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: {Name:mk7c09ec5a15390e46415471786b452a5023b626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.463892  843786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.key ...
	I0512 23:28:31.463909  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.key: {Name:mk374bf671fb9f3d39da9a04e035a0ef9d918f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.464013  843786 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key.cee25041
	I0512 23:28:31.464029  843786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 23:28:31.746329  843786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt.cee25041 ...
	I0512 23:28:31.746363  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt.cee25041: {Name:mkb8a90dd598aec292eb807d878fef881dfb8fc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.746530  843786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key.cee25041 ...
	I0512 23:28:31.746542  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key.cee25041: {Name:mk79c449876eeaaed674480c7108a6023b88f67a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.746622  843786 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt
	I0512 23:28:31.746677  843786 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key
	I0512 23:28:31.746715  843786 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.key
	I0512 23:28:31.746739  843786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.crt with IP's: []
	I0512 23:28:31.911624  843786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.crt ...
	I0512 23:28:31.911658  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.crt: {Name:mk7685dd2a9f3e2f18f0122affc2e2e2bc85cc05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.911843  843786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.key ...
	I0512 23:28:31.911860  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.key: {Name:mk9f53a886e79d975a676e06c92bd7b1a4f07e69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:31.912054  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem (1338 bytes)
	W0512 23:28:31.912107  843786 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044_empty.pem, impossibly tiny 0 bytes
	I0512 23:28:31.912127  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem (1679 bytes)
	I0512 23:28:31.912165  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem (1078 bytes)
	I0512 23:28:31.912205  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem (1123 bytes)
	I0512 23:28:31.912244  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem (1675 bytes)
	I0512 23:28:31.912307  843786 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem (1708 bytes)
	I0512 23:28:31.912889  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 23:28:31.933597  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 23:28:31.954672  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 23:28:31.972715  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 23:28:31.993120  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 23:28:32.012919  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0512 23:28:32.035330  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 23:28:32.052534  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0512 23:28:32.070090  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 23:28:32.090729  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem --> /usr/share/ca-certificates/516044.pem (1338 bytes)
	I0512 23:28:32.110781  843786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /usr/share/ca-certificates/5160442.pem (1708 bytes)
	I0512 23:28:32.130616  843786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 23:28:32.143950  843786 ssh_runner.go:195] Run: openssl version
	I0512 23:28:32.149501  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5160442.pem && ln -fs /usr/share/ca-certificates/5160442.pem /etc/ssl/certs/5160442.pem"
	I0512 23:28:32.158189  843786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5160442.pem
	I0512 23:28:32.161273  843786 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 12 22:55 /usr/share/ca-certificates/5160442.pem
	I0512 23:28:32.161325  843786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5160442.pem
	I0512 23:28:32.166749  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5160442.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 23:28:32.174140  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 23:28:32.181587  843786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:28:32.184481  843786 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 12 22:51 /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:28:32.184528  843786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:28:32.189667  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 23:28:32.197872  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516044.pem && ln -fs /usr/share/ca-certificates/516044.pem /etc/ssl/certs/516044.pem"
	I0512 23:28:32.205247  843786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516044.pem
	I0512 23:28:32.208252  843786 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 12 22:55 /usr/share/ca-certificates/516044.pem
	I0512 23:28:32.208295  843786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516044.pem
	I0512 23:28:32.212943  843786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516044.pem /etc/ssl/certs/51391683.0"
	I0512 23:28:32.220516  843786 kubeadm.go:391] StartCluster: {Name:enable-default-cni-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:enable-default-cni-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:28:32.220667  843786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 23:28:32.256057  843786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 23:28:32.264457  843786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 23:28:32.274539  843786 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 23:28:32.274597  843786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 23:28:32.282435  843786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 23:28:32.282484  843786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 23:28:32.827968  843786 out.go:204]   - Generating certificates and keys ...
	I0512 23:28:31.526546  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:34.026340  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:30.980533  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:33.476217  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:32.154642  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:34.176722  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:36.654057  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:35.419549  843786 out.go:204]   - Booting up control plane ...
	I0512 23:28:36.027893  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:38.526607  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:35.477295  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:37.480114  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:39.968161  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:39.156610  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:41.655468  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:43.463971  843786 out.go:204]   - Configuring RBAC rules ...
	I0512 23:28:43.879904  843786 cni.go:95] Creating CNI manager for "bridge"
	I0512 23:28:43.881636  843786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0512 23:28:41.026112  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:43.529445  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:42.476255  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:44.478289  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:44.154501  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:46.653952  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:43.883104  843786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0512 23:28:43.892269  843786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
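
	The 457-byte 1-k8s.conflist itself is not echoed in the log; a minimal bridge conflist of the shape minikube installs, assuming host-local IPAM on the pod CIDR configured above (10.244.0.0/16), would look like:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.244.0.0/16"
	          }
	        },
	        {
	          "type": "portmap",
	          "capabilities": { "portMappings": true }
	        }
	      ]
	    }
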
	I0512 23:28:43.910251  843786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 23:28:43.910387  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:43.910493  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=5812f8ec06db4997111dc3269784a7f664662f05 minikube.k8s.io/name=enable-default-cni-20220512231715-516044 minikube.k8s.io/updated_at=2022_05_12T23_28_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
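
	The minikube-rbac binding created above grants cluster-admin to the default service account in kube-system, which the addon manifests run as. Expressed as a manifest, the binding kubectl creates is equivalent to:

	    apiVersion: rbac.authorization.k8s.io/v1
	    kind: ClusterRoleBinding
	    metadata:
	      name: minikube-rbac
	    roleRef:
	      apiGroup: rbac.authorization.k8s.io
	      kind: ClusterRole
	      name: cluster-admin
	    subjects:
	    - kind: ServiceAccount
	      name: default
	      namespace: kube-system
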
	I0512 23:28:44.393174  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:44.393247  843786 ops.go:34] apiserver oom_adj: -16
	I0512 23:28:44.965798  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:45.466023  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:45.965204  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:46.465835  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:46.965218  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:47.465889  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:47.965258  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:48.465441  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:46.024526  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:48.025394  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:46.968381  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:48.978100  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:49.154580  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:51.655662  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:48.966225  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:49.465797  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:49.966008  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:50.466189  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:50.965705  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:51.465267  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:51.965223  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:52.465267  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:52.966129  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:53.465244  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:50.026193  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:52.026387  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:54.026539  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:50.978316  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:52.978356  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:54.153898  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:56.153995  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:53.965637  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:54.465227  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:54.965220  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:55.465736  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:55.965210  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:56.465273  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:56.965957  843786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:28:57.097629  843786 kubeadm.go:1020] duration metric: took 13.187281093s to wait for elevateKubeSystemPrivileges.
	I0512 23:28:57.097674  843786 kubeadm.go:393] StartCluster complete in 24.877166703s
	I0512 23:28:57.097714  843786 settings.go:142] acquiring lock: {Name:mkfe717360cf8b2fa45465ab4bd68ece68561c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:57.097876  843786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:28:57.100783  843786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig: {Name:mk0f3828db53b6683822ca2fe8148b87d561cdb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:28:57.621636  843786 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "enable-default-cni-20220512231715-516044" rescaled to 1
	I0512 23:28:57.621768  843786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 23:28:57.621776  843786 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:28:57.623744  843786 out.go:177] * Verifying Kubernetes components...
	I0512 23:28:57.621904  843786 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 23:28:57.622045  843786 config.go:178] Loaded profile config "enable-default-cni-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:28:57.623916  843786 addons.go:65] Setting storage-provisioner=true in profile "enable-default-cni-20220512231715-516044"
	I0512 23:28:57.623960  843786 addons.go:153] Setting addon storage-provisioner=true in "enable-default-cni-20220512231715-516044"
	W0512 23:28:57.623975  843786 addons.go:165] addon storage-provisioner should already be in state true
	I0512 23:28:57.624038  843786 host.go:66] Checking if "enable-default-cni-20220512231715-516044" exists ...
	I0512 23:28:57.624549  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:57.623921  843786 addons.go:65] Setting default-storageclass=true in profile "enable-default-cni-20220512231715-516044"
	I0512 23:28:57.624676  843786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-20220512231715-516044"
	I0512 23:28:57.625072  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:57.626451  843786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:28:57.669001  843786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 23:28:57.670271  843786 addons.go:153] Setting addon default-storageclass=true in "enable-default-cni-20220512231715-516044"
	W0512 23:28:57.670430  843786 addons.go:165] addon default-storageclass should already be in state true
	I0512 23:28:57.670435  843786 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:57.670450  843786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 23:28:57.670461  843786 host.go:66] Checking if "enable-default-cni-20220512231715-516044" exists ...
	I0512 23:28:57.670493  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:57.670839  843786 cli_runner.go:164] Run: docker container inspect enable-default-cni-20220512231715-516044 --format={{.State.Status}}
	I0512 23:28:57.714663  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:57.717213  843786 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:57.717242  843786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 23:28:57.717301  843786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220512231715-516044
	I0512 23:28:57.770811  843786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/enable-default-cni-20220512231715-516044/id_rsa Username:docker}
	I0512 23:28:57.775421  843786 node_ready.go:35] waiting up to 5m0s for node "enable-default-cni-20220512231715-516044" to be "Ready" ...
	I0512 23:28:57.775761  843786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
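
	The pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts plugin stanza in front of the forward directive, and pushes the result back with kubectl replace, so pods can resolve host.minikube.internal to the gateway. The stanza inserted into the Corefile is:

	    hosts {
	       192.168.58.1 host.minikube.internal
	       fallthrough
	    }
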
	I0512 23:28:57.779533  843786 node_ready.go:49] node "enable-default-cni-20220512231715-516044" has status "Ready":"True"
	I0512 23:28:57.779560  843786 node_ready.go:38] duration metric: took 4.104315ms waiting for node "enable-default-cni-20220512231715-516044" to be "Ready" ...
	I0512 23:28:57.779573  843786 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:28:57.789277  843786 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-mn5vf" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:57.982704  843786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:28:57.989667  843786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 23:28:58.802925  843786 pod_ready.go:92] pod "coredns-64897985d-mn5vf" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.802963  843786 pod_ready.go:81] duration metric: took 1.013645862s waiting for pod "coredns-64897985d-mn5vf" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.802976  843786 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-zh8fj" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.807214  843786 pod_ready.go:92] pod "coredns-64897985d-zh8fj" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.807232  843786 pod_ready.go:81] duration metric: took 4.248572ms waiting for pod "coredns-64897985d-zh8fj" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.807244  843786 pod_ready.go:78] waiting up to 5m0s for pod "etcd-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.811274  843786 pod_ready.go:92] pod "etcd-enable-default-cni-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.811293  843786 pod_ready.go:81] duration metric: took 4.042918ms waiting for pod "etcd-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.811303  843786 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.816660  843786 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.816679  843786 pod_ready.go:81] duration metric: took 5.368649ms waiting for pod "kube-apiserver-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.816687  843786 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.980205  843786 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:58.980231  843786 pod_ready.go:81] duration metric: took 163.537185ms waiting for pod "kube-controller-manager-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:58.980244  843786 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-r96dv" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:59.095449  843786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.319656899s)
	I0512 23:28:59.095488  843786 start.go:815] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0512 23:28:59.180597  843786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.197847052s)
	I0512 23:28:59.180676  843786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.19096805s)
	I0512 23:28:59.182440  843786 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 23:28:56.027207  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:58.526504  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:55.467832  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:57.481247  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:59.967910  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:28:59.184008  843786 addons.go:417] enableAddons completed in 1.562115267s
	I0512 23:28:59.379643  843786 pod_ready.go:92] pod "kube-proxy-r96dv" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:59.379673  843786 pod_ready.go:81] duration metric: took 399.420983ms waiting for pod "kube-proxy-r96dv" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:59.379712  843786 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:59.779346  843786 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:28:59.779369  843786 pod_ready.go:81] duration metric: took 399.613626ms waiting for pod "kube-scheduler-enable-default-cni-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:28:59.779377  843786 pod_ready.go:38] duration metric: took 1.999792048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:28:59.779402  843786 api_server.go:51] waiting for apiserver process to appear ...
	I0512 23:28:59.779436  843786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 23:28:59.790804  843786 api_server.go:71] duration metric: took 2.168979232s to wait for apiserver process to appear ...
	I0512 23:28:59.790837  843786 api_server.go:87] waiting for apiserver healthz status ...
	I0512 23:28:59.790851  843786 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0512 23:28:59.795859  843786 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
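
	The healthz poll above is a plain HTTPS GET against the apiserver that expects the literal body "ok". A standalone sketch of the same check (hypothetical code, not minikube's api_server.go; certificate verification is skipped here instead of loading minikubeCA):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	// Probe the apiserver healthz endpoint, as the log does above.
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		Transport: &http.Transport{
	    			// The apiserver cert is signed by minikubeCA, which this
	    			// sketch does not load, so verification is skipped.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	    	if err != nil {
	    		fmt.Println("healthz error:", err)
	    		return
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	    }
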
	I0512 23:28:59.796684  843786 api_server.go:140] control plane version: v1.23.5
	I0512 23:28:59.796706  843786 api_server.go:130] duration metric: took 5.861094ms to wait for apiserver health ...
	I0512 23:28:59.796716  843786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 23:28:59.981517  843786 system_pods.go:59] 8 kube-system pods found
	I0512 23:28:59.981545  843786 system_pods.go:61] "coredns-64897985d-mn5vf" [e59b9e24-8d91-48b6-b7a2-18aa6d26b098] Running
	I0512 23:28:59.981550  843786 system_pods.go:61] "coredns-64897985d-zh8fj" [38aa28b4-07a7-4d06-b518-54e6d1afdc23] Running
	I0512 23:28:59.981554  843786 system_pods.go:61] "etcd-enable-default-cni-20220512231715-516044" [8fcf7497-c4ba-4ab4-a789-d9120522eca1] Running
	I0512 23:28:59.981559  843786 system_pods.go:61] "kube-apiserver-enable-default-cni-20220512231715-516044" [20d0fa35-229f-41fa-98bf-cc303bd5b9c8] Running
	I0512 23:28:59.981563  843786 system_pods.go:61] "kube-controller-manager-enable-default-cni-20220512231715-516044" [fa0c0d7a-a738-4278-8b3d-be23fdbb234a] Running
	I0512 23:28:59.981567  843786 system_pods.go:61] "kube-proxy-r96dv" [d7e73c47-eff8-4c0e-844e-f90f187e1760] Running
	I0512 23:28:59.981571  843786 system_pods.go:61] "kube-scheduler-enable-default-cni-20220512231715-516044" [7248bedc-ff56-42a2-8297-52763e4e93d3] Running
	I0512 23:28:59.981578  843786 system_pods.go:61] "storage-provisioner" [97560aba-e557-4b15-866e-79e708fac555] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 23:28:59.981583  843786 system_pods.go:74] duration metric: took 184.861288ms to wait for pod list to return data ...
	I0512 23:28:59.981594  843786 default_sa.go:34] waiting for default service account to be created ...
	I0512 23:29:00.178632  843786 default_sa.go:45] found service account: "default"
	I0512 23:29:00.178661  843786 default_sa.go:55] duration metric: took 197.058705ms for default service account to be created ...
	I0512 23:29:00.178689  843786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0512 23:29:00.385275  843786 system_pods.go:86] 8 kube-system pods found
	I0512 23:29:00.385316  843786 system_pods.go:89] "coredns-64897985d-mn5vf" [e59b9e24-8d91-48b6-b7a2-18aa6d26b098] Running
	I0512 23:29:00.385325  843786 system_pods.go:89] "coredns-64897985d-zh8fj" [38aa28b4-07a7-4d06-b518-54e6d1afdc23] Running
	I0512 23:29:00.385332  843786 system_pods.go:89] "etcd-enable-default-cni-20220512231715-516044" [8fcf7497-c4ba-4ab4-a789-d9120522eca1] Running
	I0512 23:29:00.385339  843786 system_pods.go:89] "kube-apiserver-enable-default-cni-20220512231715-516044" [20d0fa35-229f-41fa-98bf-cc303bd5b9c8] Running
	I0512 23:29:00.385346  843786 system_pods.go:89] "kube-controller-manager-enable-default-cni-20220512231715-516044" [fa0c0d7a-a738-4278-8b3d-be23fdbb234a] Running
	I0512 23:29:00.385354  843786 system_pods.go:89] "kube-proxy-r96dv" [d7e73c47-eff8-4c0e-844e-f90f187e1760] Running
	I0512 23:29:00.385361  843786 system_pods.go:89] "kube-scheduler-enable-default-cni-20220512231715-516044" [7248bedc-ff56-42a2-8297-52763e4e93d3] Running
	I0512 23:29:00.385373  843786 system_pods.go:89] "storage-provisioner" [97560aba-e557-4b15-866e-79e708fac555] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 23:29:00.385391  843786 system_pods.go:126] duration metric: took 206.693489ms to wait for k8s-apps to be running ...
	I0512 23:29:00.385404  843786 system_svc.go:44] waiting for kubelet service to be running ....
	I0512 23:29:00.385465  843786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:29:00.398498  843786 system_svc.go:56] duration metric: took 13.082558ms WaitForService to wait for kubelet.
	I0512 23:29:00.398531  843786 kubeadm.go:548] duration metric: took 2.776714411s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0512 23:29:00.398559  843786 node_conditions.go:102] verifying NodePressure condition ...
	I0512 23:29:00.579228  843786 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0512 23:29:00.579257  843786 node_conditions.go:123] node cpu capacity is 8
	I0512 23:29:00.579269  843786 node_conditions.go:105] duration metric: took 180.705002ms to run NodePressure ...
	I0512 23:29:00.579279  843786 start.go:213] waiting for startup goroutines ...
	I0512 23:29:00.643006  843786 start.go:504] kubectl: 1.24.0, cluster: 1.23.5 (minor skew: 1)
	I0512 23:29:00.645753  843786 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-20220512231715-516044" cluster and "default" namespace by default
	I0512 23:28:58.154072  770898 pod_ready.go:102] pod "coredns-64897985d-zcth8" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:01.025356  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:29:01.968652  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	[... 219 similar pod_ready.go:102 poll lines elided: the three concurrent runs (770898, 826131, 817261) repeat these checks roughly every 2s through 23:31:45, and all three pods report "Ready":"False" on every poll ...]
	I0512 23:31:45.976127  817261 pod_ready.go:102] pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace has status "Ready":"False"
	I0512 23:31:45.980535  817261 pod_ready.go:81] duration metric: took 4m0.033406573s waiting for pod "calico-kube-controllers-8594699699-njmgp" in "kube-system" namespace to be "Ready" ...
	E0512 23:31:45.980568  817261 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:31:45.980580  817261 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-wzwqd" in "kube-system" namespace to be "Ready" ...
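
On timeout the harness does not abort; it records the error and moves on to the next pod in its list, which is why run 817261 switches from "calico-kube-controllers-8594699699-njmgp" straight to "calico-node-wzwqd" above. A sketch of that control flow, reusing the hypothetical waitPodReady from the earlier snippet (pod names are taken from this log; the loop itself is illustrative):

	for _, name := range []string{"calico-kube-controllers-8594699699-njmgp", "calico-node-wzwqd"} {
		start := time.Now()
		if err := waitPodReady(cs, "kube-system", name, 5*time.Minute); err != nil {
			fmt.Printf("WaitExtra: waitPodCondition: %v\n", err) // e.g. "timed out waiting for the condition"
			continue // give the remaining pods their own chance before failing the test
		}
		fmt.Printf("took %s waiting for pod %q to be \"Ready\"\n", time.Since(start), name)
	}
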
	I0512 23:31:47.993055  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	[... 37 similar poll lines elided: "calico-node-wzwqd" (817261), "coredns-64897985d-zcth8" (770898) and "coredns-64897985d-rqv6q" (826131) all stay "Ready":"False" through 23:32:17 ...]
	I0512 23:32:17.528719  826131 pod_ready.go:102] pod "coredns-64897985d-rqv6q" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:17.528741  826131 pod_ready.go:81] duration metric: took 4m0.016989164s waiting for pod "coredns-64897985d-rqv6q" in "kube-system" namespace to be "Ready" ...
	E0512 23:32:17.528749  826131 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:32:17.528757  826131 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.532726  826131 pod_ready.go:92] pod "etcd-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.532794  826131 pod_ready.go:81] duration metric: took 4.02892ms waiting for pod "etcd-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.532823  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.537261  826131 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.537283  826131 pod_ready.go:81] duration metric: took 4.440767ms waiting for pod "kube-apiserver-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.537295  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.541278  826131 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.541296  826131 pod_ready.go:81] duration metric: took 3.994407ms waiting for pod "kube-controller-manager-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.541305  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-2qmfq" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.922912  826131 pod_ready.go:92] pod "kube-proxy-2qmfq" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:17.922939  826131 pod_ready.go:81] duration metric: took 381.627854ms waiting for pod "kube-proxy-2qmfq" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.922952  826131 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:18.322734  826131 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220512231715-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:18.322762  826131 pod_ready.go:81] duration metric: took 399.801441ms waiting for pod "kube-scheduler-custom-weave-20220512231715-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:18.322776  826131 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-64z47" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:17.493066  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:20.728907  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	[... 13 similar poll lines elided: "calico-node-wzwqd", "coredns-64897985d-zcth8" and "weave-net-64z47" all stay "Ready":"False" through 23:32:28 ...]
	I0512 23:32:27.658212  770898 pod_ready.go:81] duration metric: took 4m0.014945263s waiting for pod "coredns-64897985d-zcth8" in "kube-system" namespace to be "Ready" ...
	E0512 23:32:27.658243  770898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 23:32:27.658253  770898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.662690  770898 pod_ready.go:92] pod "etcd-embed-certs-20220512231813-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:27.662710  770898 pod_ready.go:81] duration metric: took 4.449316ms waiting for pod "etcd-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.662721  770898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.666989  770898 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220512231813-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:27.667006  770898 pod_ready.go:81] duration metric: took 4.278203ms waiting for pod "kube-apiserver-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.667014  770898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.670975  770898 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220512231813-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:27.670994  770898 pod_ready.go:81] duration metric: took 3.972099ms waiting for pod "kube-controller-manager-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:27.671003  770898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thpfx" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:28.051694  770898 pod_ready.go:92] pod "kube-proxy-thpfx" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:28.051723  770898 pod_ready.go:81] duration metric: took 380.712904ms waiting for pod "kube-proxy-thpfx" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:28.051736  770898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:28.451574  770898 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220512231813-516044" in "kube-system" namespace has status "Ready":"True"
	I0512 23:32:28.451597  770898 pod_ready.go:81] duration metric: took 399.851675ms waiting for pod "kube-scheduler-embed-certs-20220512231813-516044" in "kube-system" namespace to be "Ready" ...
	I0512 23:32:28.451608  770898 pod_ready.go:38] duration metric: took 4m5.829919598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 23:32:28.451669  770898 api_server.go:51] waiting for apiserver process to appear ...
	I0512 23:32:28.451737  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0512 23:32:28.497355  770898 logs.go:274] 1 containers: [48098a84d7fd]
	I0512 23:32:28.497425  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0512 23:32:28.539451  770898 logs.go:274] 1 containers: [900ff0eeacc6]
	I0512 23:32:28.539535  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0512 23:32:28.573397  770898 logs.go:274] 1 containers: [aa5767628f6c]
	I0512 23:32:28.573473  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0512 23:32:28.609617  770898 logs.go:274] 1 containers: [b2d43c18073b]
	I0512 23:32:28.609698  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0512 23:32:28.642339  770898 logs.go:274] 1 containers: [dc7bed8be1c3]
	I0512 23:32:28.642408  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0512 23:32:28.677534  770898 logs.go:274] 2 containers: [987dc4684b4b 287730a8ff0d]
	I0512 23:32:28.677609  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0512 23:32:28.711848  770898 logs.go:274] 2 containers: [727092ac44e3 acbd1356496e]
	I0512 23:32:28.711936  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0512 23:32:28.745111  770898 logs.go:274] 1 containers: [dd2291ed28a8]
	I0512 23:32:28.745161  770898 logs.go:123] Gathering logs for kube-apiserver [48098a84d7fd] ...
	I0512 23:32:28.745176  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48098a84d7fd"
	I0512 23:32:28.786494  770898 logs.go:123] Gathering logs for etcd [900ff0eeacc6] ...
	I0512 23:32:28.786527  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 900ff0eeacc6"
	I0512 23:32:28.847350  770898 logs.go:123] Gathering logs for kubernetes-dashboard [287730a8ff0d] ...
	I0512 23:32:28.847396  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 287730a8ff0d"
	I0512 23:32:28.891199  770898 logs.go:123] Gathering logs for Docker ...
	I0512 23:32:28.891235  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0512 23:32:28.920464  770898 logs.go:123] Gathering logs for kubelet ...
	I0512 23:32:28.920514  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0512 23:32:29.035032  770898 logs.go:123] Gathering logs for coredns [aa5767628f6c] ...
	I0512 23:32:29.035072  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa5767628f6c"
	I0512 23:32:29.071939  770898 logs.go:123] Gathering logs for kubernetes-dashboard [987dc4684b4b] ...
	I0512 23:32:29.071970  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987dc4684b4b"
	I0512 23:32:29.116043  770898 logs.go:123] Gathering logs for storage-provisioner [727092ac44e3] ...
	I0512 23:32:29.116083  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727092ac44e3"
	I0512 23:32:29.156331  770898 logs.go:123] Gathering logs for dmesg ...
	I0512 23:32:29.156367  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0512 23:32:29.193616  770898 logs.go:123] Gathering logs for kube-scheduler [b2d43c18073b] ...
	I0512 23:32:29.193660  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d43c18073b"
	I0512 23:32:29.244418  770898 logs.go:123] Gathering logs for storage-provisioner [acbd1356496e] ...
	I0512 23:32:29.244455  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acbd1356496e"
	I0512 23:32:29.286227  770898 logs.go:123] Gathering logs for container status ...
	I0512 23:32:29.286255  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0512 23:32:29.323818  770898 logs.go:123] Gathering logs for describe nodes ...
	I0512 23:32:29.323866  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0512 23:32:29.528873  770898 logs.go:123] Gathering logs for kube-proxy [dc7bed8be1c3] ...
	I0512 23:32:29.528918  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7bed8be1c3"
	I0512 23:32:29.565549  770898 logs.go:123] Gathering logs for kube-controller-manager [dd2291ed28a8] ...
	I0512 23:32:29.565584  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd2291ed28a8"
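
Each "Gathering logs for ..." step above shells out (via ssh_runner) and captures the last 400 lines from the matching container. Locally the same capture is a one-liner; a sketch using only the standard library, with container IDs as printed by the docker ps filters above:

	// gatherLogs captures the tail of one container's log, mirroring the
	// `docker logs --tail 400 <id>` commands in the run above.
	// Needs: import "os/exec"
	func gatherLogs(containerID string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c",
			"docker logs --tail 400 "+containerID).CombinedOutput()
		return string(out), err
	}
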
	I0512 23:32:30.229507  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:32.230528  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:30.993003  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:32.993251  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:34.993417  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:32.130138  770898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 23:32:32.141508  770898 api_server.go:71] duration metric: took 4m9.74381138s to wait for apiserver process to appear ...
	I0512 23:32:32.141542  770898 api_server.go:87] waiting for apiserver healthz status ...
	I0512 23:32:32.141612  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0512 23:32:32.174741  770898 logs.go:274] 1 containers: [48098a84d7fd]
	I0512 23:32:32.174806  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0512 23:32:32.208424  770898 logs.go:274] 1 containers: [900ff0eeacc6]
	I0512 23:32:32.208515  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0512 23:32:32.248530  770898 logs.go:274] 1 containers: [aa5767628f6c]
	I0512 23:32:32.248625  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0512 23:32:32.288568  770898 logs.go:274] 1 containers: [b2d43c18073b]
	I0512 23:32:32.288658  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0512 23:32:32.327993  770898 logs.go:274] 1 containers: [dc7bed8be1c3]
	I0512 23:32:32.328078  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0512 23:32:32.363739  770898 logs.go:274] 2 containers: [987dc4684b4b 287730a8ff0d]
	I0512 23:32:32.363826  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0512 23:32:32.401236  770898 logs.go:274] 1 containers: [727092ac44e3]
	I0512 23:32:32.401323  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0512 23:32:32.440931  770898 logs.go:274] 1 containers: [dd2291ed28a8]
	I0512 23:32:32.440985  770898 logs.go:123] Gathering logs for describe nodes ...
	I0512 23:32:32.441003  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0512 23:32:32.544688  770898 logs.go:123] Gathering logs for kube-apiserver [48098a84d7fd] ...
	I0512 23:32:32.544723  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48098a84d7fd"
	I0512 23:32:32.590838  770898 logs.go:123] Gathering logs for etcd [900ff0eeacc6] ...
	I0512 23:32:32.590886  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 900ff0eeacc6"
	I0512 23:32:32.664264  770898 logs.go:123] Gathering logs for kube-scheduler [b2d43c18073b] ...
	I0512 23:32:32.664310  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d43c18073b"
	I0512 23:32:32.722865  770898 logs.go:123] Gathering logs for container status ...
	I0512 23:32:32.722907  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0512 23:32:32.759017  770898 logs.go:123] Gathering logs for dmesg ...
	I0512 23:32:32.759059  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0512 23:32:32.804656  770898 logs.go:123] Gathering logs for coredns [aa5767628f6c] ...
	I0512 23:32:32.804700  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa5767628f6c"
	I0512 23:32:32.846176  770898 logs.go:123] Gathering logs for kube-proxy [dc7bed8be1c3] ...
	I0512 23:32:32.846205  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7bed8be1c3"
	I0512 23:32:32.882921  770898 logs.go:123] Gathering logs for kubernetes-dashboard [287730a8ff0d] ...
	I0512 23:32:32.882955  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 287730a8ff0d"
	I0512 23:32:32.926829  770898 logs.go:123] Gathering logs for Docker ...
	I0512 23:32:32.926863  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0512 23:32:32.950798  770898 logs.go:123] Gathering logs for storage-provisioner [727092ac44e3] ...
	I0512 23:32:32.950838  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727092ac44e3"
	I0512 23:32:32.988067  770898 logs.go:123] Gathering logs for kubelet ...
	I0512 23:32:32.988106  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0512 23:32:33.102130  770898 logs.go:123] Gathering logs for kubernetes-dashboard [987dc4684b4b] ...
	I0512 23:32:33.102177  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987dc4684b4b"
	I0512 23:32:33.144153  770898 logs.go:123] Gathering logs for kube-controller-manager [dd2291ed28a8] ...
	I0512 23:32:33.144202  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd2291ed28a8"
	I0512 23:32:35.700314  770898 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0512 23:32:35.705418  770898 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0512 23:32:35.706257  770898 api_server.go:140] control plane version: v1.23.5
	I0512 23:32:35.706281  770898 api_server.go:130] duration metric: took 3.56473069s to wait for apiserver health ...
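
The healthz probe above boils down to an HTTPS GET against the apiserver endpoint, where a 200 with an "ok" body counts as healthy. A sketch (InsecureSkipVerify stands in for the real client-certificate configuration from the kubeconfig and is only acceptable against a throwaway test cluster):

	// checkHealthz GETs e.g. https://192.168.67.2:8443/healthz and expects 200/"ok".
	// Needs: import ("crypto/tls"; "fmt"; "io"; "net/http"; "time")
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d:\n\t%s\n", url, resp.StatusCode, body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}
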
	I0512 23:32:35.706292  770898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 23:32:35.706347  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0512 23:32:35.739928  770898 logs.go:274] 1 containers: [48098a84d7fd]
	I0512 23:32:35.739999  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0512 23:32:35.770636  770898 logs.go:274] 1 containers: [900ff0eeacc6]
	I0512 23:32:35.770705  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0512 23:32:35.803485  770898 logs.go:274] 1 containers: [aa5767628f6c]
	I0512 23:32:35.803564  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0512 23:32:35.840458  770898 logs.go:274] 1 containers: [b2d43c18073b]
	I0512 23:32:35.840534  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0512 23:32:35.880164  770898 logs.go:274] 1 containers: [dc7bed8be1c3]
	I0512 23:32:35.880250  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0512 23:32:35.920165  770898 logs.go:274] 1 containers: [987dc4684b4b]
	I0512 23:32:35.920267  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0512 23:32:35.953726  770898 logs.go:274] 1 containers: [727092ac44e3]
	I0512 23:32:35.953810  770898 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0512 23:32:35.987190  770898 logs.go:274] 1 containers: [dd2291ed28a8]
	I0512 23:32:35.987230  770898 logs.go:123] Gathering logs for kubelet ...
	I0512 23:32:35.987247  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0512 23:32:36.103312  770898 logs.go:123] Gathering logs for kube-apiserver [48098a84d7fd] ...
	I0512 23:32:36.103370  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48098a84d7fd"
	I0512 23:32:36.151474  770898 logs.go:123] Gathering logs for kube-scheduler [b2d43c18073b] ...
	I0512 23:32:36.151516  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d43c18073b"
	I0512 23:32:36.198185  770898 logs.go:123] Gathering logs for kube-controller-manager [dd2291ed28a8] ...
	I0512 23:32:36.198217  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd2291ed28a8"
	I0512 23:32:36.257601  770898 logs.go:123] Gathering logs for storage-provisioner [727092ac44e3] ...
	I0512 23:32:36.257641  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727092ac44e3"
	I0512 23:32:36.298125  770898 logs.go:123] Gathering logs for Docker ...
	I0512 23:32:36.298155  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0512 23:32:36.324682  770898 logs.go:123] Gathering logs for dmesg ...
	I0512 23:32:36.324718  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0512 23:32:36.358743  770898 logs.go:123] Gathering logs for describe nodes ...
	I0512 23:32:36.358777  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0512 23:32:36.462551  770898 logs.go:123] Gathering logs for etcd [900ff0eeacc6] ...
	I0512 23:32:36.462583  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 900ff0eeacc6"
	I0512 23:32:36.548739  770898 logs.go:123] Gathering logs for coredns [aa5767628f6c] ...
	I0512 23:32:36.548780  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa5767628f6c"
	I0512 23:32:36.596416  770898 logs.go:123] Gathering logs for kube-proxy [dc7bed8be1c3] ...
	I0512 23:32:36.596465  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7bed8be1c3"
	I0512 23:32:36.636761  770898 logs.go:123] Gathering logs for kubernetes-dashboard [987dc4684b4b] ...
	I0512 23:32:36.636791  770898 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 987dc4684b4b"
	I0512 23:32:36.669771  770898 logs.go:123] Gathering logs for container status ...
	I0512 23:32:36.669801  770898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0512 23:32:39.214092  770898 system_pods.go:59] 8 kube-system pods found
	I0512 23:32:39.214124  770898 system_pods.go:61] "coredns-64897985d-zcth8" [31142980-1191-40da-b252-be5993499640] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 23:32:39.214130  770898 system_pods.go:61] "etcd-embed-certs-20220512231813-516044" [9732c40a-dab7-458c-bed0-7bf2d845dc6d] Running
	I0512 23:32:39.214136  770898 system_pods.go:61] "kube-apiserver-embed-certs-20220512231813-516044" [0a16efda-bac5-4432-8adf-6fd5ebc0267a] Running
	I0512 23:32:39.214141  770898 system_pods.go:61] "kube-controller-manager-embed-certs-20220512231813-516044" [544284a8-77cf-480e-9860-ac60b37810bc] Running
	I0512 23:32:39.214145  770898 system_pods.go:61] "kube-proxy-thpfx" [a4570809-edf5-49ba-9973-417a32f66e0e] Running
	I0512 23:32:39.214151  770898 system_pods.go:61] "kube-scheduler-embed-certs-20220512231813-516044" [c1158183-e2cf-48cd-b6c1-13b5aa66f69b] Running
	I0512 23:32:39.214159  770898 system_pods.go:61] "metrics-server-b955d9d8-x295t" [bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 23:32:39.214171  770898 system_pods.go:61] "storage-provisioner" [043b802f-2325-4434-bf13-35dfc71b743e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 23:32:39.214186  770898 system_pods.go:74] duration metric: took 3.507880682s to wait for pod list to return data ...
	I0512 23:32:39.214205  770898 default_sa.go:34] waiting for default service account to be created ...
	I0512 23:32:39.216181  770898 default_sa.go:45] found service account: "default"
	I0512 23:32:39.216202  770898 default_sa.go:55] duration metric: took 1.990439ms for default service account to be created ...
	I0512 23:32:39.216210  770898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0512 23:32:39.221501  770898 system_pods.go:86] 8 kube-system pods found
	I0512 23:32:39.221536  770898 system_pods.go:89] "coredns-64897985d-zcth8" [31142980-1191-40da-b252-be5993499640] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 23:32:39.221547  770898 system_pods.go:89] "etcd-embed-certs-20220512231813-516044" [9732c40a-dab7-458c-bed0-7bf2d845dc6d] Running
	I0512 23:32:39.221556  770898 system_pods.go:89] "kube-apiserver-embed-certs-20220512231813-516044" [0a16efda-bac5-4432-8adf-6fd5ebc0267a] Running
	I0512 23:32:39.221571  770898 system_pods.go:89] "kube-controller-manager-embed-certs-20220512231813-516044" [544284a8-77cf-480e-9860-ac60b37810bc] Running
	I0512 23:32:39.221578  770898 system_pods.go:89] "kube-proxy-thpfx" [a4570809-edf5-49ba-9973-417a32f66e0e] Running
	I0512 23:32:39.221588  770898 system_pods.go:89] "kube-scheduler-embed-certs-20220512231813-516044" [c1158183-e2cf-48cd-b6c1-13b5aa66f69b] Running
	I0512 23:32:39.221599  770898 system_pods.go:89] "metrics-server-b955d9d8-x295t" [bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 23:32:39.221614  770898 system_pods.go:89] "storage-provisioner" [043b802f-2325-4434-bf13-35dfc71b743e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 23:32:39.221628  770898 system_pods.go:126] duration metric: took 5.412487ms to wait for k8s-apps to be running ...
	I0512 23:32:39.221640  770898 system_svc.go:44] waiting for kubelet service to be running ....
	I0512 23:32:39.221689  770898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:32:39.232305  770898 system_svc.go:56] duration metric: took 10.655146ms WaitForService to wait for kubelet.
	I0512 23:32:39.232327  770898 kubeadm.go:548] duration metric: took 4m16.83463904s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0512 23:32:39.232347  770898 node_conditions.go:102] verifying NodePressure condition ...
	I0512 23:32:39.234702  770898 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0512 23:32:39.234724  770898 node_conditions.go:123] node cpu capacity is 8
	I0512 23:32:39.234735  770898 node_conditions.go:105] duration metric: took 2.381387ms to run NodePressure ...
	I0512 23:32:39.234747  770898 start.go:213] waiting for startup goroutines ...
	I0512 23:32:39.281124  770898 start.go:504] kubectl: 1.24.0, cluster: 1.23.5 (minor skew: 1)
	I0512 23:32:39.283591  770898 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220512231813-516044" cluster and "default" namespace by default
	I0512 23:32:34.729190  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:36.729835  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:38.730504  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:37.493752  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:39.992662  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:41.229145  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:43.729326  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:42.493628  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:44.993359  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:45.731148  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:48.229060  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:47.493754  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:49.992930  817261 pod_ready.go:102] pod "calico-node-wzwqd" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:50.229125  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
	I0512 23:32:52.230365  826131 pod_ready.go:102] pod "weave-net-64z47" in "kube-system" namespace has status "Ready":"False"
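	
	Three test processes are interleaved above: 770898 (the embed-certs cluster, which completed at 23:32:39 above), 826131 (polling weave-net-64z47), and 817261 (the calico run under test, still waiting on calico-node-wzwqd). Each pod_ready line is one poll of the pod's Ready condition; a minimal equivalent probe with kubectl, assuming the current kubeconfig context points at the calico cluster, would be:
	
	  kubectl -n kube-system get pod calico-node-wzwqd \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'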
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 23:22:47 UTC, end at Thu 2022-05-12 23:32:55 UTC. --
	May 12 23:28:42 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:28:42.516692287Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:28:42 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:28:42.518645398Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:28:49 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:28:49.681944833Z" level=info msg="ignoring event" container=5ae88164da8743d0ea266531809a940b25b20b5dc9d03aa10a01b9b8d4f777d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:28:56 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:28:56.798966700Z" level=info msg="ignoring event" container=4bdf19dc44ba3982e6b7d092fb2b8d517fc2b38d45b95b137ba21631783f51f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:29:06 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:06.524305136Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:29:06 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:06.524344791Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:29:06 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:06.526318363Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:29:06 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:06.668155814Z" level=info msg="ignoring event" container=62f3808e68323f31410c8305ac0f29ca91bfd0eedc24943c78c036262fd3aa44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:29:20 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:20.676875170Z" level=info msg="ignoring event" container=1ae619541c0ab1e85513fe2b1d8ebf84fa8796ff4941d302376783c3cbc8f1e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:29:27 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:27.910416439Z" level=info msg="ignoring event" container=aaf62a45a647cda77bb07cf48c3c92d08e127963a8e5fadd3eda5b68dbbc76e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:29:37 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:29:37.058659820Z" level=info msg="ignoring event" container=e9e0013894675a64d6afdb5230fd1851cb716cf69e5543179ca63b6c542a8275 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:30:00 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:00.529161449Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:30:00 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:00.529205312Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:30:00 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:00.530977396Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:30:04 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:04.653627006Z" level=info msg="ignoring event" container=1cd006e9a37de8029f40473dbd09b5495fd5bab08c9ca1a99097e9f6a7ba591c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:30:11 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:11.652843089Z" level=info msg="ignoring event" container=e18bc47f2141ef5ac368b630b274bb257a3d8daebf8ee7a0012ea08fa15a3a7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:30:29 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:30:29.659515351Z" level=info msg="ignoring event" container=aef32dfa43a76433bc4eff66809e2ff36bdc4dba88a4d62d5af8dce6a0ca901a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:31:10 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:10.626186505Z" level=info msg="ignoring event" container=acbd1356496e29ffa881090ca61b1473f7e070165a16ca6ea4dbf94872f6a570 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:31:21 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:21.645658726Z" level=info msg="ignoring event" container=287730a8ff0d89a4653613d92f3a53fa63ec7365d94c73570ca27abae825ec5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:31:25 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:25.646524763Z" level=info msg="ignoring event" container=25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:31:30 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:30.529345687Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:31:30 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:30.529382145Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:31:30 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:31:30.531659400Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	May 12 23:32:30 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:32:30.653012562Z" level=info msg="ignoring event" container=727092ac44e3a68485ee527acbd975bddcbbcf9a615f6a9fb9d1b45924db17cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 23:32:34 embed-certs-20220512231813-516044 dockerd[245]: time="2022-05-12T23:32:34.643944628Z" level=info msg="ignoring event" container=987dc4684b4bcd76c2376298fe9129259eab32b8a29bb7dae4fb7dd69e2f0973 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
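	
	The recurring fake.domain pull failures above are the counterpart of the metrics-server ImagePullBackOff in the kubelet section below; the failing pull reproduces directly (image reference taken from the kubelet log, expected to fail with the same DNS lookup error):
	
	  docker pull fake.domain/k8s.gcr.io/echoserver:1.4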
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	987dc4684b4bc       7fff914c4a615       51 seconds ago       Exited              kubernetes-dashboard        4                   c6b7397b134d6
	727092ac44e3a       6e38f40d628db       55 seconds ago       Exited              storage-provisioner         4                   2902fe86f9b27
	25fb116460078       a90209bb39e3d       About a minute ago   Exited              dashboard-metrics-scraper   5                   3934a8affcd12
	aa5767628f6c8       a4ca41631cc7a       4 minutes ago        Running             coredns                     0                   8410ebedb8c21
	dc7bed8be1c34       3c53fa8541f95       4 minutes ago        Running             kube-proxy                  0                   9855422fb6fb4
	b2d43c18073be       884d49d6d8c9f       4 minutes ago        Running             kube-scheduler              2                   facb532e1da11
	48098a84d7fde       3fc1d62d65872       4 minutes ago        Running             kube-apiserver              2                   14cd5a47d5076
	900ff0eeacc6f       25f8c7f3da61c       4 minutes ago        Running             etcd                        2                   ee3e11268967c
	dd2291ed28a82       b0c9e5e4dbb14       4 minutes ago        Running             kube-controller-manager     2                   84fbd3ddee7b5
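	
	All control-plane containers are Running, while kubernetes-dashboard, storage-provisioner, and dashboard-metrics-scraper are Exited after 4-5 attempts, i.e. crash-looping. The same filtered view the log gatherer uses can be reproduced by hand, for example:
	
	  docker ps -a --filter=name=k8s_dashboard-metrics-scraper --format={{.ID}}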
	
	* 
	* ==> coredns [aa5767628f6c] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[... last message repeated 24 more times ...]
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220512231813-516044
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220512231813-516044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5812f8ec06db4997111dc3269784a7f664662f05
	                    minikube.k8s.io/name=embed-certs-20220512231813-516044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_12T23_28_08_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 May 2022 23:28:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220512231813-516044
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 May 2022 23:32:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 May 2022 23:28:39 +0000   Thu, 12 May 2022 23:28:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 May 2022 23:28:39 +0000   Thu, 12 May 2022 23:28:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 May 2022 23:28:39 +0000   Thu, 12 May 2022 23:28:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 May 2022 23:28:39 +0000   Thu, 12 May 2022 23:28:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220512231813-516044
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873824Ki
	  pods:               110
	System Info:
	  Machine ID:                 1729fd8b7c184ebda96a08181510f608
	  System UUID:                03c44298-fcaf-4873-a4e1-09e2c3009e1b
	  Boot ID:                    88a64cd6-2747-4e4a-a528-ec239b8b4bba
	  Kernel Version:             5.13.0-1025-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.15
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-zcth8                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m35s
	  kube-system                 etcd-embed-certs-20220512231813-516044                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m48s
	  kube-system                 kube-apiserver-embed-certs-20220512231813-516044             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-embed-certs-20220512231813-516044    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-thpfx                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-embed-certs-20220512231813-516044             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 metrics-server-b955d9d8-x295t                                100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m31s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-qnw7q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-6z6nx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m31s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  4m56s (x4 over 4m56s)  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x4 over 4m56s)  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x4 over 4m56s)  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m48s                  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s                  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s                  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m48s                  kubelet     Starting kubelet.
	  Normal  NodeReady                4m38s                  kubelet     Node embed-certs-20220512231813-516044 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 8d 85 23 32 1b 08 06
	[  +2.823721] IPv4: martian source 10.85.0.92 from 10.85.0.92, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a fa 21 4b 5f 30 08 06
	[  +2.486921] IPv4: martian source 10.85.0.93 from 10.85.0.93, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a b7 5a d7 25 ce 08 06
	[  +2.823208] IPv4: martian source 10.85.0.94 from 10.85.0.94, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 30 c7 b4 d5 50 08 06
	[  +2.957810] IPv4: martian source 10.85.0.95 from 10.85.0.95, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a e6 b0 55 2d 23 08 06
	[  +2.955499] IPv4: martian source 10.85.0.96 from 10.85.0.96, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e d9 04 40 72 80 08 06
	[  +2.356634] IPv4: martian source 10.244.0.124 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e ff 16 ff 6f 35 08 06
	[  +0.523560] IPv4: martian source 10.85.0.97 from 10.85.0.97, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 4f 5a 78 6f dc 08 06
	[  +0.495932] IPv4: martian source 10.244.0.124 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e ff 16 ff 6f 35 08 06
	[  +1.023924] IPv4: martian source 10.244.0.124 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e ff 16 ff 6f 35 08 06
	[  +1.647979] IPv4: martian source 10.85.0.98 from 10.85.0.98, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 03 f2 1a 44 6b 08 06
	[  +1.308042] IPv4: martian source 10.244.0.124 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e ff 16 ff 6f 35 08 06
	[  +1.011660] IPv4: martian source 10.244.0.124 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e ff 16 ff 6f 35 08 06
	
	* 
	* ==> etcd [900ff0eeacc6] <==
	* {"level":"info","ts":"2022-05-12T23:28:25.229Z","caller":"traceutil/trace.go:171","msg":"trace[1592263554] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:472; }","duration":"220.560715ms","start":"2022-05-12T23:28:25.009Z","end":"2022-05-12T23:28:25.229Z","steps":["trace[1592263554] 'read index received'  (duration: 100.451579ms)","trace[1592263554] 'applied index is now lower than readState.Index'  (duration: 120.107872ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T23:28:25.229Z","caller":"traceutil/trace.go:171","msg":"trace[1616192255] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"221.425168ms","start":"2022-05-12T23:28:25.008Z","end":"2022-05-12T23:28:25.229Z","steps":["trace[1616192255] 'process raft request'  (duration: 101.091479ms)","trace[1616192255] 'compare'  (duration: 119.662709ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T23:28:25.230Z","caller":"traceutil/trace.go:171","msg":"trace[1683575468] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"221.616422ms","start":"2022-05-12T23:28:25.008Z","end":"2022-05-12T23:28:25.230Z","steps":["trace[1683575468] 'process raft request'  (duration: 220.927227ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T23:28:25.230Z","caller":"traceutil/trace.go:171","msg":"trace[610901355] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"219.970754ms","start":"2022-05-12T23:28:25.010Z","end":"2022-05-12T23:28:25.230Z","steps":["trace[610901355] 'process raft request'  (duration: 219.112986ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.230Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"221.57052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/kubernetes-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T23:28:25.230Z","caller":"traceutil/trace.go:171","msg":"trace[800546737] range","detail":"{range_begin:/registry/clusterrolebindings/kubernetes-dashboard; range_end:; response_count:0; response_revision:461; }","duration":"221.605798ms","start":"2022-05-12T23:28:25.008Z","end":"2022-05-12T23:28:25.230Z","steps":["trace[800546737] 'agreement among raft nodes before linearized reading'  (duration: 221.520387ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1580646413] linearizableReadLoop","detail":"{readStateIndex:477; appliedIndex:476; }","duration":"303.065119ms","start":"2022-05-12T23:28:25.236Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1580646413] 'read index received'  (duration: 251.236621ms)","trace[1580646413] 'applied index is now lower than readState.Index'  (duration: 51.827693ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1197618302] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"302.08302ms","start":"2022-05-12T23:28:25.237Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1197618302] 'process raft request'  (duration: 301.990186ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"303.270868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.237Z","time spent":"302.161226ms","remote":"127.0.0.1:41872","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3041,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/metrics-server-b955d9d8\" mod_revision:0 > success:<request_put:<key:\"/registry/replicasets/kube-system/metrics-server-b955d9d8\" value_size:2976 >> failure:<>"}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[921791399] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:464; }","duration":"303.330606ms","start":"2022-05-12T23:28:25.236Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[921791399] 'agreement among raft nodes before linearized reading'  (duration: 303.220603ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.236Z","time spent":"303.376861ms","remote":"127.0.0.1:41838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":29,"request content":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" "}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"302.345882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20220512231813-516044\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1379106560] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"303.399636ms","start":"2022-05-12T23:28:25.236Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1379106560] 'process raft request'  (duration: 251.3211ms)","trace[1379106560] 'compare'  (duration: 51.63098ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1061428158] range","detail":"{range_begin:/registry/minions/embed-certs-20220512231813-516044; range_end:; response_count:1; response_revision:464; }","duration":"302.372231ms","start":"2022-05-12T23:28:25.237Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1061428158] 'agreement among raft nodes before linearized reading'  (duration: 302.323786ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.237Z","time spent":"302.401755ms","remote":"127.0.0.1:41782","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4590,"request content":"key:\"/registry/minions/embed-certs-20220512231813-516044\" "}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"303.615622ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"301.843757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kubernetes-dashboard/kubernetes-dashboard-settings\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"156.283983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.236Z","time spent":"303.47453ms","remote":"127.0.0.1:41786","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":220,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" mod_revision:455 > success:<request_put:<key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" value_size:158 >> failure:<request_range:<key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" > >"}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1693590207] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:464; }","duration":"156.314569ms","start":"2022-05-12T23:28:25.383Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1693590207] 'agreement among raft nodes before linearized reading'  (duration: 156.253409ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[599143980] range","detail":"{range_begin:/registry/configmaps/kubernetes-dashboard/kubernetes-dashboard-settings; range_end:; response_count:0; response_revision:464; }","duration":"301.881115ms","start":"2022-05-12T23:28:25.237Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[599143980] 'agreement among raft nodes before linearized reading'  (duration: 301.821401ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.539Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.237Z","time spent":"302.012007ms","remote":"127.0.0.1:41774","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":29,"request content":"key:\"/registry/configmaps/kubernetes-dashboard/kubernetes-dashboard-settings\" "}
	{"level":"info","ts":"2022-05-12T23:28:25.539Z","caller":"traceutil/trace.go:171","msg":"trace[1907040849] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:464; }","duration":"303.640192ms","start":"2022-05-12T23:28:25.236Z","end":"2022-05-12T23:28:25.539Z","steps":["trace[1907040849] 'agreement among raft nodes before linearized reading'  (duration: 303.600301ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T23:28:25.540Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T23:28:25.236Z","time spent":"303.869655ms","remote":"127.0.0.1:41786","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" "}
	
	* 
	* ==> kernel <==
	*  23:32:56 up  6:15,  0 users,  load average: 4.67, 5.59, 4.32
	Linux embed-certs-20220512231813-516044 5.13.0-1025-gcp #30~20.04.1-Ubuntu SMP Tue Apr 26 03:01:25 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [48098a84d7fd] <==
	* I0512 23:28:06.625727       1 controller.go:611] quota admission added evaluator for: endpoints
	I0512 23:28:06.636720       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0512 23:28:07.306937       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0512 23:28:07.889748       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0512 23:28:07.900543       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0512 23:28:07.917355       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0512 23:28:08.331923       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0512 23:28:20.722343       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0512 23:28:21.191351       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0512 23:28:24.093678       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0512 23:28:25.792834       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.236.68]
	W0512 23:28:25.914378       1 handler_proxy.go:104] no RequestInfo found in the context
	E0512 23:28:25.914455       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 23:28:25.914464       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0512 23:28:25.990545       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.109.187.12]
	I0512 23:28:26.028007       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.240.35]
	W0512 23:29:25.915318       1 handler_proxy.go:104] no RequestInfo found in the context
	E0512 23:29:25.915405       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 23:29:25.915423       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0512 23:31:25.916269       1 handler_proxy.go:104] no RequestInfo found in the context
	E0512 23:31:25.916372       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 23:31:25.916392       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [dd2291ed28a8] <==
	* I0512 23:28:25.704503       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0512 23:28:25.779229       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 23:28:25.779617       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0512 23:28:25.787394       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 23:28:25.787394       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0512 23:28:25.796236       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-qnw7q"
	I0512 23:28:25.818216       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-6z6nx"
	E0512 23:28:50.391723       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:28:50.836272       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:29:20.408782       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:29:20.853435       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:29:50.426456       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:29:50.868820       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:30:20.445543       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:30:20.890124       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:30:50.462458       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:30:50.907058       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:31:20.478855       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:31:20.925774       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:31:50.503627       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:31:50.942929       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:32:20.523487       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:32:20.957993       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 23:32:50.538532       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 23:32:50.972919       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
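	
	The resource-quota and garbage-collector errors above recur on a ~30s cycle because the aggregated metrics.k8s.io/v1beta1 API never becomes available (its backing metrics-server pod cannot pull its image, per the Docker and kubelet sections). One way to confirm the aggregated API's state, assuming kubectl access to this cluster:
	
	  kubectl get apiservice v1beta1.metrics.k8s.io \
	    -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'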
	
	* 
	* ==> kube-proxy [dc7bed8be1c3] <==
	* I0512 23:28:23.579567       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0512 23:28:23.579697       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0512 23:28:23.579757       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0512 23:28:24.089347       1 server_others.go:206] "Using iptables Proxier"
	I0512 23:28:24.089384       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0512 23:28:24.089394       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0512 23:28:24.089420       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0512 23:28:24.089786       1 server.go:656] "Version info" version="v1.23.5"
	I0512 23:28:24.090624       1 config.go:226] "Starting endpoint slice config controller"
	I0512 23:28:24.090661       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0512 23:28:24.090683       1 config.go:317] "Starting service config controller"
	I0512 23:28:24.090688       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0512 23:28:24.191510       1 shared_informer.go:247] Caches are synced for service config 
	I0512 23:28:24.191529       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [b2d43c18073b] <==
	* W0512 23:28:05.290979       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 23:28:05.291117       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0512 23:28:05.291368       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0512 23:28:05.291524       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0512 23:28:05.291842       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0512 23:28:05.291983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0512 23:28:05.293109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 23:28:05.293143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0512 23:28:05.293708       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0512 23:28:05.293776       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0512 23:28:06.174717       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 23:28:06.174772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0512 23:28:06.174847       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 23:28:06.174935       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0512 23:28:06.184356       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0512 23:28:06.184390       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0512 23:28:06.190550       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0512 23:28:06.190582       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0512 23:28:06.232668       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0512 23:28:06.232701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0512 23:28:06.257839       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0512 23:28:06.257883       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0512 23:28:06.378295       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0512 23:28:06.378335       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0512 23:28:08.277321       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
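	
	The forbidden errors above are startup-ordering noise: the scheduler's informers begin listing before its RBAC grants are served, and the retries stop once caches report synced at 23:28:08. After startup the permission can be spot-checked via impersonation, assuming cluster-admin access:
	
	  kubectl auth can-i list persistentvolumes --as system:kube-scheduler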
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 23:22:47 UTC, end at Thu 2022-05-12 23:32:56 UTC. --
	May 12 23:32:14 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:14.490954    4970 scope.go:110] "RemoveContainer" containerID="25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1"
	May 12 23:32:14 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:14.491338    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-qnw7q_kubernetes-dashboard(e88272f4-f193-4eba-91a1-fa966d4b7483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-qnw7q" podUID=e88272f4-f193-4eba-91a1-fa966d4b7483
	May 12 23:32:22 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:22.492562    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-b955d9d8-x295t" podUID=bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc
	May 12 23:32:28 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:28.492239    4970 scope.go:110] "RemoveContainer" containerID="25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1"
	May 12 23:32:28 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:28.492638    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-qnw7q_kubernetes-dashboard(e88272f4-f193-4eba-91a1-fa966d4b7483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-qnw7q" podUID=e88272f4-f193-4eba-91a1-fa966d4b7483
	May 12 23:32:31 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:31.103226    4970 scope.go:110] "RemoveContainer" containerID="acbd1356496e29ffa881090ca61b1473f7e070165a16ca6ea4dbf94872f6a570"
	May 12 23:32:31 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:31.103614    4970 scope.go:110] "RemoveContainer" containerID="727092ac44e3a68485ee527acbd975bddcbbcf9a615f6a9fb9d1b45924db17cb"
	May 12 23:32:31 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:31.103873    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(043b802f-2325-4434-bf13-35dfc71b743e)\"" pod="kube-system/storage-provisioner" podUID=043b802f-2325-4434-bf13-35dfc71b743e
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:35.134874    4970 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx through plugin: invalid network status for"
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:35.139893    4970 scope.go:110] "RemoveContainer" containerID="287730a8ff0d89a4653613d92f3a53fa63ec7365d94c73570ca27abae825ec5a"
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:35.140242    4970 scope.go:110] "RemoveContainer" containerID="987dc4684b4bcd76c2376298fe9129259eab32b8a29bb7dae4fb7dd69e2f0973"
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:35.140585    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-8469778f77-6z6nx_kubernetes-dashboard(6ffbcd0f-ff86-4fbc-906e-472268aebcf5)\"" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx" podUID=6ffbcd0f-ff86-4fbc-906e-472268aebcf5
	May 12 23:32:35 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:35.492587    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-b955d9d8-x295t" podUID=bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc
	May 12 23:32:36 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:36.147577    4970 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx through plugin: invalid network status for"
	May 12 23:32:36 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:36.150662    4970 scope.go:110] "RemoveContainer" containerID="987dc4684b4bcd76c2376298fe9129259eab32b8a29bb7dae4fb7dd69e2f0973"
	May 12 23:32:36 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:36.151012    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-8469778f77-6z6nx_kubernetes-dashboard(6ffbcd0f-ff86-4fbc-906e-472268aebcf5)\"" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx" podUID=6ffbcd0f-ff86-4fbc-906e-472268aebcf5
	May 12 23:32:40 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:40.491687    4970 scope.go:110] "RemoveContainer" containerID="25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1"
	May 12 23:32:40 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:40.492096    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-qnw7q_kubernetes-dashboard(e88272f4-f193-4eba-91a1-fa966d4b7483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-qnw7q" podUID=e88272f4-f193-4eba-91a1-fa966d4b7483
	May 12 23:32:46 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:46.492275    4970 scope.go:110] "RemoveContainer" containerID="727092ac44e3a68485ee527acbd975bddcbbcf9a615f6a9fb9d1b45924db17cb"
	May 12 23:32:46 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:46.492532    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(043b802f-2325-4434-bf13-35dfc71b743e)\"" pod="kube-system/storage-provisioner" podUID=043b802f-2325-4434-bf13-35dfc71b743e
	May 12 23:32:49 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:49.493141    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-b955d9d8-x295t" podUID=bc8fa89e-0cc4-44b7-a83b-83a42d3ac9dc
	May 12 23:32:51 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:51.491160    4970 scope.go:110] "RemoveContainer" containerID="987dc4684b4bcd76c2376298fe9129259eab32b8a29bb7dae4fb7dd69e2f0973"
	May 12 23:32:51 embed-certs-20220512231813-516044 kubelet[4970]: I0512 23:32:51.491308    4970 scope.go:110] "RemoveContainer" containerID="25fb116460078d0b7ed3cbeddb11ad2187e0f6d716a1000cb1186c912db539c1"
	May 12 23:32:51 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:51.491567    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-8469778f77-6z6nx_kubernetes-dashboard(6ffbcd0f-ff86-4fbc-906e-472268aebcf5)\"" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-6z6nx" podUID=6ffbcd0f-ff86-4fbc-906e-472268aebcf5
	May 12 23:32:51 embed-certs-20220512231813-516044 kubelet[4970]: E0512 23:32:51.491617    4970 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-qnw7q_kubernetes-dashboard(e88272f4-f193-4eba-91a1-fa966d4b7483)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-qnw7q" podUID=e88272f4-f193-4eba-91a1-fa966d4b7483
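	Two failure modes recur in the kubelet log above: metrics-server is in ImagePullBackOff because the test deliberately points it at a non-resolvable registry (fake.domain), while dashboard-metrics-scraper, kubernetes-dashboard, and storage-provisioner are in CrashLoopBackOff. A triage sketch (pod names copied from the log above, not invented):
	  kubectl --context embed-certs-20220512231813-516044 get pods -A | grep -Ev 'Running|Completed'
	  kubectl --context embed-certs-20220512231813-516044 -n kubernetes-dashboard logs -p kubernetes-dashboard-8469778f77-6z6nx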
	
	* 
	* ==> kubernetes-dashboard [987dc4684b4b] <==
	* 2022/05/12 23:32:04 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0005dfaf0)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0001dcc00)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x194fa64)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x1cf
	2022/05/12 23:32:04 Using namespace: kubernetes-dashboard
	2022/05/12 23:32:04 Using in-cluster config to connect to apiserver
	2022/05/12 23:32:04 Using secret token for csrf signing
	2022/05/12 23:32:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	
	* 
	* ==> storage-provisioner [727092ac44e3] <==
	* I0512 23:32:00.633013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0512 23:32:30.636455       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
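	Both the dashboard panic above and this storage-provisioner fatal are the same symptom: an i/o timeout dialing 10.96.0.1:443, the in-cluster "kubernetes" service VIP (the first address of the 10.96.0.0/12 service CIDR). That points at pod-to-service networking (kube-proxy/CNI) rather than at either component. A minimal in-cluster probe, assuming a pullable busybox image:
	  kubectl --context embed-certs-20220512231813-516044 run netcheck --rm -i --restart=Never --image=busybox:1.35 -- nc -w 5 10.96.0.1 443
	A prompt, clean exit means the VIP answered; hanging for ~5 seconds and failing reproduces the timeout seen here.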
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220512231813-516044 -n embed-certs-20220512231813-516044
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220512231813-516044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-x295t
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220512231813-516044 describe pod metrics-server-b955d9d8-x295t
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220512231813-516044 describe pod metrics-server-b955d9d8-x295t: exit status 1 (64.717913ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-x295t" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220512231813-516044 describe pod metrics-server-b955d9d8-x295t: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.76s)
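The post-mortem itself races here: the pod flagged as non-running was already gone by the time the follow-up describe ran, hence the NotFound. Describing by label instead of by generated name sidesteps that window (the k8s-app=metrics-server selector is an assumption about the addon's labels, not taken from this run):

kubectl --context embed-certs-20220512231813-516044 -n kube-system describe pod -l k8s-app=metrics-server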

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (276.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
E0512 23:33:01.310363  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
E0512 23:33:03.216486  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/cilium-20220512231715-516044/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kindnet-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: exit status 80 (4m36.66247467s)

                                                
                                                
-- stdout --
	* [kindnet-20220512231715-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node kindnet-20220512231715-516044 in cluster kindnet-20220512231715-516044
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
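The stdout above shows the one kindnet-specific step: minikube auto-injects the extra kubelet flag cni-conf-dir=/etc/cni/net.mk so the kubelet picks up kindnet's CNI config. The equivalent start command with that setting spelled out explicitly (same flags as the test run, plus the normally auto-set option):

out/minikube-linux-amd64 start -p kindnet-20220512231715-516044 --memory=2048 --cni=kindnet \
  --extra-config=kubelet.cni-conf-dir=/etc/cni/net.mk --driver=docker --container-runtime=docker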
** stderr ** 
	I0512 23:33:00.734386  973037 out.go:296] Setting OutFile to fd 1 ...
	I0512 23:33:00.734598  973037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:33:00.734605  973037 out.go:309] Setting ErrFile to fd 2...
	I0512 23:33:00.734613  973037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:33:00.734760  973037 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 23:33:00.735088  973037 out.go:303] Setting JSON to false
	I0512 23:33:00.736824  973037 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":22537,"bootTime":1652375844,"procs":710,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 23:33:00.736918  973037 start.go:125] virtualization: kvm guest
	I0512 23:33:00.739679  973037 out.go:177] * [kindnet-20220512231715-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0512 23:33:00.741322  973037 out.go:177]   - MINIKUBE_LOCATION=12739
	I0512 23:33:00.741284  973037 notify.go:193] Checking for updates...
	I0512 23:33:00.744135  973037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 23:33:00.745582  973037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:33:00.747246  973037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 23:33:00.748945  973037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0512 23:33:00.750804  973037 config.go:178] Loaded profile config "calico-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:33:00.750895  973037 config.go:178] Loaded profile config "custom-weave-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:33:00.750969  973037 config.go:178] Loaded profile config "enable-default-cni-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:33:00.751032  973037 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 23:33:00.798833  973037 docker.go:137] docker version: linux-20.10.16
	I0512 23:33:00.798942  973037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:33:00.919417  973037 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:33:00.833974788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:33:00.919523  973037 docker.go:254] overlay module found
	I0512 23:33:00.921611  973037 out.go:177] * Using the docker driver based on user configuration
	I0512 23:33:00.922803  973037 start.go:284] selected driver: docker
	I0512 23:33:00.922818  973037 start.go:806] validating driver "docker" against <nil>
	I0512 23:33:00.922842  973037 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 23:33:00.925949  973037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:33:01.046041  973037 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 23:33:00.957694013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:33:01.046174  973037 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 23:33:01.046345  973037 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 23:33:01.048282  973037 out.go:177] * Using Docker driver with the root privilege
	I0512 23:33:01.049555  973037 cni.go:95] Creating CNI manager for "kindnet"
	I0512 23:33:01.049589  973037 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0512 23:33:01.049595  973037 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0512 23:33:01.049601  973037 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0512 23:33:01.049621  973037 start_flags.go:306] config:
	{Name:kindnet-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:33:01.051080  973037 out.go:177] * Starting control plane node kindnet-20220512231715-516044 in cluster kindnet-20220512231715-516044
	I0512 23:33:01.052227  973037 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 23:33:01.053390  973037 out.go:177] * Pulling base image ...
	I0512 23:33:01.054515  973037 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:33:01.054557  973037 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 23:33:01.054573  973037 cache.go:57] Caching tarball of preloaded images
	I0512 23:33:01.054603  973037 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0512 23:33:01.054817  973037 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 23:33:01.054836  973037 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 23:33:01.054974  973037 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/config.json ...
	I0512 23:33:01.055001  973037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/config.json: {Name:mk2eb8b5b1ff11fd1b6698c60ce20ddf183267a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:33:01.110374  973037 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
	I0512 23:33:01.110400  973037 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in daemon, skipping load
	I0512 23:33:01.110416  973037 cache.go:206] Successfully downloaded all kic artifacts
	I0512 23:33:01.110456  973037 start.go:352] acquiring machines lock for kindnet-20220512231715-516044: {Name:mk3712d2d600d38bf2c8f700a5144d4f85c77fbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 23:33:01.110616  973037 start.go:356] acquired machines lock for "kindnet-20220512231715-516044" in 129.072µs
	I0512 23:33:01.110652  973037 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:33:01.110803  973037 start.go:131] createHost starting for "" (driver="docker")
	I0512 23:33:01.114155  973037 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 23:33:01.114426  973037 start.go:165] libmachine.API.Create for "kindnet-20220512231715-516044" (driver="docker")
	I0512 23:33:01.114471  973037 client.go:168] LocalClient.Create starting
	I0512 23:33:01.114556  973037 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem
	I0512 23:33:01.114601  973037 main.go:134] libmachine: Decoding PEM data...
	I0512 23:33:01.114626  973037 main.go:134] libmachine: Parsing certificate...
	I0512 23:33:01.114702  973037 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem
	I0512 23:33:01.114730  973037 main.go:134] libmachine: Decoding PEM data...
	I0512 23:33:01.114750  973037 main.go:134] libmachine: Parsing certificate...
	I0512 23:33:01.115139  973037 cli_runner.go:164] Run: docker network inspect kindnet-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 23:33:01.159594  973037 cli_runner.go:211] docker network inspect kindnet-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 23:33:01.159698  973037 network_create.go:272] running [docker network inspect kindnet-20220512231715-516044] to gather additional debugging logs...
	I0512 23:33:01.159724  973037 cli_runner.go:164] Run: docker network inspect kindnet-20220512231715-516044
	W0512 23:33:01.199916  973037 cli_runner.go:211] docker network inspect kindnet-20220512231715-516044 returned with exit code 1
	I0512 23:33:01.199953  973037 network_create.go:275] error running [docker network inspect kindnet-20220512231715-516044]: docker network inspect kindnet-20220512231715-516044: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220512231715-516044
	I0512 23:33:01.199971  973037 network_create.go:277] output of [docker network inspect kindnet-20220512231715-516044]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220512231715-516044
	
	** /stderr **
	I0512 23:33:01.200021  973037 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 23:33:01.242551  973037 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-43829243746f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:13:6e:f4:7c}}
	I0512 23:33:01.243224  973037 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-a47a71a1979b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ed:8a:ea:57}}
	I0512 23:33:01.243819  973037 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000010370] misses:0}
	I0512 23:33:01.243852  973037 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 23:33:01.243864  973037 network_create.go:115] attempt to create docker network kindnet-20220512231715-516044 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0512 23:33:01.243908  973037 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220512231715-516044
	I0512 23:33:01.326530  973037 network_create.go:99] docker network kindnet-20220512231715-516044 192.168.67.0/24 created
	I0512 23:33:01.326565  973037 kic.go:106] calculated static IP "192.168.67.2" for the "kindnet-20220512231715-516044" container
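	(minikube skipped 192.168.49.0/24 and 192.168.58.0/24 because earlier profiles own them, reserved 192.168.67.0/24, and derived the node's static IP from the subnet's first client address. A quick way to confirm the created network from the host, for illustration:
	  docker network inspect kindnet-20220512231715-516044 --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}' )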
	I0512 23:33:01.326625  973037 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 23:33:01.364923  973037 cli_runner.go:164] Run: docker volume create kindnet-20220512231715-516044 --label name.minikube.sigs.k8s.io=kindnet-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true
	I0512 23:33:01.402411  973037 oci.go:103] Successfully created a docker volume kindnet-20220512231715-516044
	I0512 23:33:01.402502  973037 cli_runner.go:164] Run: docker run --rm --name kindnet-20220512231715-516044-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220512231715-516044 --entrypoint /usr/bin/test -v kindnet-20220512231715-516044:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -d /var/lib
	I0512 23:33:02.075683  973037 oci.go:107] Successfully prepared a docker volume kindnet-20220512231715-516044
	I0512 23:33:02.075753  973037 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:33:02.075783  973037 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 23:33:02.075861  973037 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 23:33:06.993083  973037 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220512231715-516044:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir: (4.917164624s)
	I0512 23:33:06.993148  973037 kic.go:188] duration metric: took 4.917360 seconds to extract preloaded images to volume
	W0512 23:33:06.993322  973037 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0512 23:33:06.993439  973037 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 23:33:07.119933  973037 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220512231715-516044 --name kindnet-20220512231715-516044 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220512231715-516044 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220512231715-516044 --network kindnet-20220512231715-516044 --ip 192.168.67.2 --volume kindnet-20220512231715-516044:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c
	I0512 23:33:07.548799  973037 cli_runner.go:164] Run: docker container inspect kindnet-20220512231715-516044 --format={{.State.Running}}
	I0512 23:33:07.591562  973037 cli_runner.go:164] Run: docker container inspect kindnet-20220512231715-516044 --format={{.State.Status}}
	I0512 23:33:07.627311  973037 cli_runner.go:164] Run: docker exec kindnet-20220512231715-516044 stat /var/lib/dpkg/alternatives/iptables
	I0512 23:33:07.715159  973037 oci.go:144] the created container "kindnet-20220512231715-516044" has a running status.
	I0512 23:33:07.715203  973037 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/kindnet-20220512231715-516044/id_rsa...
	I0512 23:33:07.766831  973037 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/kindnet-20220512231715-516044/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 23:33:07.864788  973037 cli_runner.go:164] Run: docker container inspect kindnet-20220512231715-516044 --format={{.State.Status}}
	I0512 23:33:07.902537  973037 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 23:33:07.902572  973037 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220512231715-516044 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 23:33:07.994107  973037 cli_runner.go:164] Run: docker container inspect kindnet-20220512231715-516044 --format={{.State.Status}}
	I0512 23:33:08.037179  973037 machine.go:88] provisioning docker machine ...
	I0512 23:33:08.037227  973037 ubuntu.go:169] provisioning hostname "kindnet-20220512231715-516044"
	I0512 23:33:08.037290  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:08.086979  973037 main.go:134] libmachine: Using SSH client type: native
	I0512 23:33:08.087246  973037 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49452 <nil> <nil>}
	I0512 23:33:08.087281  973037 main.go:134] libmachine: About to run SSH command:
	sudo hostname kindnet-20220512231715-516044 && echo "kindnet-20220512231715-516044" | sudo tee /etc/hostname
	I0512 23:33:08.243096  973037 main.go:134] libmachine: SSH cmd err, output: <nil>: kindnet-20220512231715-516044
	
	I0512 23:33:08.243190  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:08.283297  973037 main.go:134] libmachine: Using SSH client type: native
	I0512 23:33:08.283517  973037 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49452 <nil> <nil>}
	I0512 23:33:08.283554  973037 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220512231715-516044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220512231715-516044/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220512231715-516044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 23:33:08.456836  973037 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 23:33:08.456867  973037 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube}
	I0512 23:33:08.456912  973037 ubuntu.go:177] setting up certificates
	I0512 23:33:08.456934  973037 provision.go:83] configureAuth start
	I0512 23:33:08.456992  973037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220512231715-516044
	I0512 23:33:08.495493  973037 provision.go:138] copyHostCerts
	I0512 23:33:08.495575  973037 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem, removing ...
	I0512 23:33:08.495592  973037 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem
	I0512 23:33:08.495684  973037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cert.pem (1123 bytes)
	I0512 23:33:08.495801  973037 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem, removing ...
	I0512 23:33:08.495821  973037 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem
	I0512 23:33:08.495899  973037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/key.pem (1675 bytes)
	I0512 23:33:08.496050  973037 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem, removing ...
	I0512 23:33:08.496068  973037 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem
	I0512 23:33:08.496113  973037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.pem (1078 bytes)
	I0512 23:33:08.496284  973037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220512231715-516044 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220512231715-516044]
	I0512 23:33:08.816282  973037 provision.go:172] copyRemoteCerts
	I0512 23:33:08.816353  973037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 23:33:08.816406  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:08.849503  973037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49452 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/kindnet-20220512231715-516044/id_rsa Username:docker}
	I0512 23:33:08.949022  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 23:33:08.966657  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0512 23:33:08.983644  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 23:33:09.002132  973037 provision.go:86] duration metric: configureAuth took 545.180955ms
	I0512 23:33:09.002155  973037 ubuntu.go:193] setting minikube options for container-runtime
	I0512 23:33:09.002341  973037 config.go:178] Loaded profile config "kindnet-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:33:09.002397  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:09.038975  973037 main.go:134] libmachine: Using SSH client type: native
	I0512 23:33:09.039158  973037 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49452 <nil> <nil>}
	I0512 23:33:09.039180  973037 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 23:33:09.169918  973037 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 23:33:09.169941  973037 ubuntu.go:71] root file system type: overlay
	I0512 23:33:09.170099  973037 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 23:33:09.170161  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:09.208174  973037 main.go:134] libmachine: Using SSH client type: native
	I0512 23:33:09.208379  973037 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49452 <nil> <nil>}
	I0512 23:33:09.208442  973037 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 23:33:09.356334  973037 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 23:33:09.356406  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:09.392257  973037 main.go:134] libmachine: Using SSH client type: native
	I0512 23:33:09.392434  973037 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil>  [] 0s} 127.0.0.1 49452 <nil> <nil>}
	I0512 23:33:09.392456  973037 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 23:33:10.105967  973037 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 23:33:09.351978262 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
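	The SSH command at 23:33:09 gates the restart on diff's exit status: diff -u returns 0 when the two files are identical, so the mv / daemon-reload / restart group only runs when the rendered unit actually differs from what is on disk. The same idempotent-update pattern, spelled out:
	
	    # replace the unit and restart docker only if the content changed
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new >/dev/null; then
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl daemon-reload
	      sudo systemctl restart docker
	    fi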
	
	I0512 23:33:10.106007  973037 machine.go:91] provisioned docker machine in 2.06880085s
	I0512 23:33:10.106019  973037 client.go:171] LocalClient.Create took 8.991537574s
	I0512 23:33:10.106037  973037 start.go:173] duration metric: libmachine.API.Create for "kindnet-20220512231715-516044" took 8.991612976s
	I0512 23:33:10.106054  973037 start.go:306] post-start starting for "kindnet-20220512231715-516044" (driver="docker")
	I0512 23:33:10.106062  973037 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 23:33:10.106125  973037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 23:33:10.106179  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:10.138960  973037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49452 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/kindnet-20220512231715-516044/id_rsa Username:docker}
	I0512 23:33:10.235005  973037 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 23:33:10.238379  973037 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 23:33:10.238414  973037 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 23:33:10.238428  973037 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 23:33:10.238436  973037 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 23:33:10.238447  973037 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/addons for local assets ...
	I0512 23:33:10.238509  973037 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files for local assets ...
	I0512 23:33:10.238644  973037 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem -> 5160442.pem in /etc/ssl/certs
	I0512 23:33:10.238767  973037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 23:33:10.247750  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /etc/ssl/certs/5160442.pem (1708 bytes)
	I0512 23:33:10.269882  973037 start.go:309] post-start completed in 163.810356ms
	I0512 23:33:10.270331  973037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220512231715-516044
	I0512 23:33:10.312120  973037 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/config.json ...
	I0512 23:33:10.312409  973037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 23:33:10.312460  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:10.350096  973037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49452 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/kindnet-20220512231715-516044/id_rsa Username:docker}
	I0512 23:33:10.446226  973037 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
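	The two df probes estimate how full /var is: awk 'NR==2{print $5}' takes the second line of df output (the first data row) and prints its fifth field (Use%), while the -BG variant prints the fourth field (available space in gigabytes). Run by hand they look like this (values illustrative):
	
	    df -h /var  | awk 'NR==2{print $5}'   # e.g. 17%
	    df -BG /var | awk 'NR==2{print $4}'   # e.g. 180G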
	I0512 23:33:10.450512  973037 start.go:134] duration metric: createHost completed in 9.339686471s
	I0512 23:33:10.450540  973037 start.go:81] releasing machines lock for "kindnet-20220512231715-516044", held for 9.339906176s
	I0512 23:33:10.450633  973037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220512231715-516044
	I0512 23:33:10.488108  973037 ssh_runner.go:195] Run: systemctl --version
	I0512 23:33:10.488171  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:10.488200  973037 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 23:33:10.488279  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:10.533608  973037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49452 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/kindnet-20220512231715-516044/id_rsa Username:docker}
	I0512 23:33:10.535364  973037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49452 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/kindnet-20220512231715-516044/id_rsa Username:docker}
	I0512 23:33:10.626486  973037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 23:33:10.658941  973037 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:33:10.669188  973037 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 23:33:10.669263  973037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 23:33:10.680245  973037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 23:33:10.697900  973037 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
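	The crictl.yaml written just above points both the runtime and image endpoints at the dockershim socket, which is how CRI tooling reaches the Docker runtime on these nodes. A quick smoke test on the node, assuming crictl is installed:
	
	    # confirm crictl can reach the runtime through the dockershim socket
	    sudo crictl --config /etc/crictl.yaml info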
	I0512 23:33:10.789796  973037 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 23:33:10.878190  973037 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 23:33:10.890450  973037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 23:33:10.977235  973037 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 23:33:10.987453  973037 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:33:11.032633  973037 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 23:33:11.075525  973037 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 23:33:11.075613  973037 cli_runner.go:164] Run: docker network inspect kindnet-20220512231715-516044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 23:33:11.111984  973037 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0512 23:33:11.115452  973037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
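	The /etc/hosts update above works around the fact that shell redirection does not inherit sudo: the filtered file is assembled as an unprivileged temp file and then copied into place as root. Roughly equivalent, unrolled:
	
	    # drop any stale host.minikube.internal record, append the fresh one,
	    # then install the result with root privileges
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'192.168.67.1\thost.minikube.internal'; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts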
	I0512 23:33:11.128214  973037 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0512 23:33:11.129747  973037 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 23:33:11.129830  973037 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:33:11.168511  973037 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:33:11.168545  973037 docker.go:541] Images already preloaded, skipping extraction
	I0512 23:33:11.168603  973037 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 23:33:11.204111  973037 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 23:33:11.204151  973037 cache_images.go:84] Images are preloaded, skipping loading
	I0512 23:33:11.204225  973037 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 23:33:11.304368  973037 cni.go:95] Creating CNI manager for "kindnet"
	I0512 23:33:11.304416  973037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 23:33:11.304432  973037 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220512231715-516044 NodeName:kindnet-20220512231715-516044 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 23:33:11.304558  973037 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20220512231715-516044"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
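	A rendered kubeadm config like the one above can be sanity-checked without touching a live node: kubeadm init supports a dry run that validates the file and prints what it would do. A minimal sketch, assuming kubeadm v1.23 on the PATH:
	
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run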
	
	I0512 23:33:11.304642  973037 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220512231715-516044 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0512 23:33:11.304693  973037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 23:33:11.347391  973037 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 23:33:11.347480  973037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 23:33:11.357423  973037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (407 bytes)
	I0512 23:33:11.373148  973037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 23:33:11.390254  973037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
	I0512 23:33:11.408922  973037 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0512 23:33:11.413018  973037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 23:33:11.424513  973037 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044 for IP: 192.168.67.2
	I0512 23:33:11.424655  973037 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key
	I0512 23:33:11.424704  973037 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key
	I0512 23:33:11.424782  973037 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/client.key
	I0512 23:33:11.424802  973037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/client.crt with IP's: []
	I0512 23:33:11.608535  973037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/client.crt ...
	I0512 23:33:11.608577  973037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/client.crt: {Name:mkc51c5b439c93cb5345bba8b9594553ca67ed99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:33:11.608816  973037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/client.key ...
	I0512 23:33:11.608842  973037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/client.key: {Name:mk0bc1ef045f6401dfbda6bab8e2efd0fddb33c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:33:11.608990  973037 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.key.c7fa3a9e
	I0512 23:33:11.609013  973037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 23:33:12.009858  973037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.crt.c7fa3a9e ...
	I0512 23:33:12.009905  973037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.crt.c7fa3a9e: {Name:mk94da033cc16a1d13dce162d2716ce63a79238c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:33:12.010167  973037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.key.c7fa3a9e ...
	I0512 23:33:12.010193  973037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.key.c7fa3a9e: {Name:mkfae536b0ee203a4f1cc8b258b5ed1a8cfc3c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:33:12.010310  973037 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.crt
	I0512 23:33:12.010388  973037 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.key
	I0512 23:33:12.010452  973037 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/proxy-client.key
	I0512 23:33:12.010467  973037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/proxy-client.crt with IP's: []
	I0512 23:33:12.170569  973037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/proxy-client.crt ...
	I0512 23:33:12.170602  973037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/proxy-client.crt: {Name:mk502633f476e6b3df00f11373a1e6e3e2ed1fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:33:12.170810  973037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/proxy-client.key ...
	I0512 23:33:12.170824  973037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/proxy-client.key: {Name:mk72c25da669be4b96cae66c2b819ab471457246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:33:12.170992  973037 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem (1338 bytes)
	W0512 23:33:12.171045  973037 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044_empty.pem, impossibly tiny 0 bytes
	I0512 23:33:12.171063  973037 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca-key.pem (1679 bytes)
	I0512 23:33:12.171085  973037 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/ca.pem (1078 bytes)
	I0512 23:33:12.171108  973037 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/cert.pem (1123 bytes)
	I0512 23:33:12.171140  973037 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/key.pem (1675 bytes)
	I0512 23:33:12.171180  973037 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem (1708 bytes)
	I0512 23:33:12.171710  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 23:33:12.191459  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0512 23:33:12.211681  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 23:33:12.231053  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/kindnet-20220512231715-516044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 23:33:12.250184  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 23:33:12.268537  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0512 23:33:12.286997  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 23:33:12.304878  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0512 23:33:12.322569  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/ssl/certs/5160442.pem --> /usr/share/ca-certificates/5160442.pem (1708 bytes)
	I0512 23:33:12.340233  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 23:33:12.358293  973037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/certs/516044.pem --> /usr/share/ca-certificates/516044.pem (1338 bytes)
	I0512 23:33:12.376685  973037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 23:33:12.391305  973037 ssh_runner.go:195] Run: openssl version
	I0512 23:33:12.396991  973037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5160442.pem && ln -fs /usr/share/ca-certificates/5160442.pem /etc/ssl/certs/5160442.pem"
	I0512 23:33:12.406286  973037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5160442.pem
	I0512 23:33:12.410491  973037 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 12 22:55 /usr/share/ca-certificates/5160442.pem
	I0512 23:33:12.410550  973037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5160442.pem
	I0512 23:33:12.416524  973037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5160442.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 23:33:12.426254  973037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 23:33:12.435576  973037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:33:12.439288  973037 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 12 22:51 /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:33:12.439341  973037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 23:33:12.444942  973037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 23:33:12.454415  973037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516044.pem && ln -fs /usr/share/ca-certificates/516044.pem /etc/ssl/certs/516044.pem"
	I0512 23:33:12.462135  973037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516044.pem
	I0512 23:33:12.465374  973037 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 12 22:55 /usr/share/ca-certificates/516044.pem
	I0512 23:33:12.465422  973037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516044.pem
	I0512 23:33:12.470548  973037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516044.pem /etc/ssl/certs/51391683.0"
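	The hash-named symlinks created above follow OpenSSL's c_rehash layout: TLS clients look up a CA in /etc/ssl/certs by its subject-name hash plus a .0 suffix, which is why each certificate is hashed first and then linked under that name. For the minikube CA this looks like:
	
	    # compute the subject hash OpenSSL expects for the link name
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0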
	I0512 23:33:12.479248  973037 kubeadm.go:391] StartCluster: {Name:kindnet-20220512231715-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220512231715-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 23:33:12.479413  973037 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 23:33:12.521574  973037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 23:33:12.530264  973037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 23:33:12.537386  973037 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 23:33:12.537437  973037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 23:33:12.544396  973037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 23:33:12.544435  973037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
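	The long --ignore-preflight-errors list disables host checks (swap, memory, ports, bridge-nf sysctls, SystemVerification) that do not apply inside a Docker container. Which checks would fire on a given host can be seen by running just the preflight phase against the same config:
	
	    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml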
	I0512 23:33:24.170711  973037 out.go:204]   - Generating certificates and keys ...
	I0512 23:33:24.173994  973037 out.go:204]   - Booting up control plane ...
	I0512 23:33:24.177063  973037 out.go:204]   - Configuring RBAC rules ...
	I0512 23:33:24.179253  973037 cni.go:95] Creating CNI manager for "kindnet"
	I0512 23:33:24.181216  973037 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0512 23:33:24.182798  973037 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0512 23:33:24.187849  973037 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0512 23:33:24.187871  973037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0512 23:33:24.208196  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0512 23:33:25.420353  973037 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.212105282s)
	I0512 23:33:25.420430  973037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 23:33:25.420497  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:25.420498  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=5812f8ec06db4997111dc3269784a7f664662f05 minikube.k8s.io/name=kindnet-20220512231715-516044 minikube.k8s.io/updated_at=2022_05_12T23_33_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
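	The minikube-rbac binding created here grants cluster-admin to the kube-system default service account, which the following steps depend on; once the apiserver answers it can be inspected with:
	
	    sudo /var/lib/minikube/binaries/v1.23.5/kubectl get clusterrolebinding minikube-rbac \
	        -o wide --kubeconfig=/var/lib/minikube/kubeconfig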
	I0512 23:33:25.506923  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:25.507005  973037 ops.go:34] apiserver oom_adj: -16
	I0512 23:33:26.093540  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:26.593233  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:27.093477  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:27.593238  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:28.093290  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:28.593189  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:29.093221  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:29.593308  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:30.093619  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:30.593778  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:31.093553  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:31.593902  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:32.093226  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:32.593964  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:33.093917  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:33.593946  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:34.093647  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:34.593743  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:35.092986  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:35.593928  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:36.093318  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:36.593828  973037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 23:33:36.657806  973037 kubeadm.go:1020] duration metric: took 11.237366119s to wait for elevateKubeSystemPrivileges.
	I0512 23:33:36.657893  973037 kubeadm.go:393] StartCluster complete in 24.178654257s
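	The burst of kubectl get sa default calls between 23:33:25 and 23:33:36 is a poll: provisioning waits, at roughly 500 ms intervals, for the controller manager to create the default service account before continuing. The equivalent shell loop:
	
	    # retry until the default service account exists
	    until sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done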
	I0512 23:33:36.657928  973037 settings.go:142] acquiring lock: {Name:mkfe717360cf8b2fa45465ab4bd68ece68561c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:33:36.658032  973037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 23:33:36.659964  973037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig: {Name:mk0f3828db53b6683822ca2fe8148b87d561cdb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 23:33:37.177680  973037 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220512231715-516044" rescaled to 1
	I0512 23:33:37.177819  973037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 23:33:37.177905  973037 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 23:33:37.177965  973037 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220512231715-516044"
	I0512 23:33:37.177995  973037 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220512231715-516044"
	W0512 23:33:37.178003  973037 addons.go:165] addon storage-provisioner should already be in state true
	I0512 23:33:37.178005  973037 config.go:178] Loaded profile config "kindnet-20220512231715-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:33:37.178051  973037 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220512231715-516044"
	I0512 23:33:37.178053  973037 host.go:66] Checking if "kindnet-20220512231715-516044" exists ...
	I0512 23:33:37.178063  973037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220512231715-516044"
	I0512 23:33:37.178423  973037 cli_runner.go:164] Run: docker container inspect kindnet-20220512231715-516044 --format={{.State.Status}}
	I0512 23:33:37.177754  973037 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 23:33:37.180412  973037 out.go:177] * Verifying Kubernetes components...
	I0512 23:33:37.178573  973037 cli_runner.go:164] Run: docker container inspect kindnet-20220512231715-516044 --format={{.State.Status}}
	I0512 23:33:37.182044  973037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:33:37.237417  973037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 23:33:37.238836  973037 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:33:37.238863  973037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 23:33:37.238926  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:37.244849  973037 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220512231715-516044"
	W0512 23:33:37.244882  973037 addons.go:165] addon default-storageclass should already be in state true
	I0512 23:33:37.244913  973037 host.go:66] Checking if "kindnet-20220512231715-516044" exists ...
	I0512 23:33:37.245580  973037 cli_runner.go:164] Run: docker container inspect kindnet-20220512231715-516044 --format={{.State.Status}}
	I0512 23:33:37.286745  973037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49452 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/kindnet-20220512231715-516044/id_rsa Username:docker}
	I0512 23:33:37.293069  973037 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 23:33:37.293145  973037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 23:33:37.293231  973037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220512231715-516044
	I0512 23:33:37.298063  973037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 23:33:37.299954  973037 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220512231715-516044" to be "Ready" ...
	I0512 23:33:37.348111  973037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49452 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/kindnet-20220512231715-516044/id_rsa Username:docker}
	I0512 23:33:37.486721  973037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 23:33:37.491903  973037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 23:33:37.773689  973037 start.go:815] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
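	The sed pipeline at 23:33:37.298 splices a hosts{} block into the CoreDNS Corefile so that pods can resolve host.minikube.internal to the gateway address. The injected block can be confirmed afterwards with:
	
	    sudo /var/lib/minikube/binaries/v1.23.5/kubectl -n kube-system get configmap coredns \
	        -o yaml --kubeconfig=/var/lib/minikube/kubeconfig | grep -B1 -A3 'hosts {'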
	I0512 23:33:37.911411  973037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 23:33:37.912484  973037 addons.go:417] enableAddons completed in 734.585442ms
	I0512 23:33:39.306860  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:33:41.307944  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:33:43.807088  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:33:46.306685  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:33:48.807243  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:33:51.307000  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:33:53.807768  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:33:56.307454  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:33:58.307557  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:00.806668  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:02.806984  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:04.807580  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:07.307409  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:09.807109  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:12.307364  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:14.307483  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:16.310060  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:18.806422  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:20.806665  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:22.807490  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:25.307229  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:27.307603  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:29.806529  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:31.806927  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:33.807410  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:36.306479  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:38.307001  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:40.308043  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:42.806682  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:44.807372  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:47.307898  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:49.806575  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:52.307535  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:54.810482  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:57.306924  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:34:59.807542  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:02.307366  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:04.807495  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:07.307628  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:09.806642  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:12.307368  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:14.307578  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:16.807537  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:19.306585  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:21.306723  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:23.307092  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:25.307708  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:27.806918  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:29.807495  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:32.307012  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:34.307816  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:36.806617  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:38.807818  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:41.306777  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:43.307324  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:45.806755  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:47.807423  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:49.807830  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:52.307047  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:54.806347  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:56.806692  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:35:58.807311  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:00.807961  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:03.306322  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:05.306488  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:07.306775  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:09.307552  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:11.806315  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:13.806733  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:16.306972  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:18.307251  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:20.307845  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:22.806943  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:25.307881  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:27.807038  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:30.306320  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:32.806306  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:34.806797  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:37.306800  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:39.806647  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:41.806998  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:43.807109  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:46.306236  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:48.306405  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:50.306791  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:52.806895  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:55.307128  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:57.806433  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:36:59.806879  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:02.306537  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:04.806520  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:06.806588  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:08.806867  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:11.306083  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:13.306322  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:15.306810  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:17.806241  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:19.806317  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:21.806616  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:24.306561  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:26.306785  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:28.307250  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:30.307747  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:32.806478  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:34.807312  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:37.307057  973037 node_ready.go:58] node "kindnet-20220512231715-516044" has status "Ready":"False"
	I0512 23:37:37.309300  973037 node_ready.go:38] duration metric: took 4m0.00931103s waiting for node "kindnet-20220512231715-516044" to be "Ready" ...
	I0512 23:37:37.311486  973037 out.go:177] 
	W0512 23:37:37.312995  973037 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0512 23:37:37.313014  973037 out.go:239] * 
	W0512 23:37:37.313760  973037 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 23:37:37.316044  973037 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (276.68s)
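
The node in the kindnet profile never reported Ready before the 4m node wait expired (poll loop above), which typically means the CNI pods never came up. A possible manual triage against this profile, as a sketch only: it assumes the profile and its kubeconfig context still exist, and nothing below is confirmed by this run.

	# Inspect the node's Ready condition and recent events
	kubectl --context kindnet-20220512231715-516044 describe node kindnet-20220512231715-516044
	# See whether the kindnet CNI pods in kube-system ever started
	kubectl --context kindnet-20220512231715-516044 -n kube-system get pods -o wide
	# Collect the full log bundle, as the failure message above suggests
	out/minikube-linux-amd64 logs -p kindnet-20220512231715-516044 --file=logs.txt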

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (322.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:36:16.391330  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13698357s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:36:44.075615  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131794941s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12780003s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:37:07.581839  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:37:14.932139  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125620748s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:37:20.347381  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129517448s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:37:48.032355  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145639041s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:37:58.096418  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/cilium-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:38:21.749223  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:38:23.273391  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140338804s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:38:25.781536  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/cilium-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:38:52.278832  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134472577s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:39:01.283359  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:01.288659  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:01.298902  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:01.319147  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:01.359384  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:01.439706  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:01.600074  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:01.920762  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:02.561872  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:03.178709  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:39:03.842233  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:39:06.402586  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:11.522927  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:11.886649  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130356556s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:39:21.763328  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:42.244128  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:39:44.557742  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123305758s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:40:23.204509  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133946507s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134096474s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (322.91s)
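
Every nslookup attempt above timed out with "no servers could be reached", so the netcat pod could not reach cluster DNS at all; the test expects the kubernetes.default lookup to return the service IP 10.96.0.1. A minimal manual reproduction against this profile (a sketch, assuming the profile still exists):

	# Re-run the exact probe the test uses
	kubectl --context bridge-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
	# Confirm the cluster DNS Service exists and has endpoints behind it
	kubectl --context bridge-20220512231715-516044 -n kube-system get svc,endpoints kube-dns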

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (374.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133438291s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:41:16.391046  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127119132s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12647853s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13448833s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:42:07.582342  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131485172s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:42:20.347142  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138668496s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134601832s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:42:58.096386  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/cilium-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122612438s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:43:21.749143  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:43:23.272647  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1345756s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:43:52.278042  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:44:01.283313  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
E0512 23:44:03.178333  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:44:11.886507  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:44:28.965034  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/enable-default-cni-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130069591s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:44:44.557010  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:45:15.323514  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
E0512 23:45:26.222274  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157484482s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 23:46:02.386024  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:02.391321  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:02.401550  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:02.421792  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:02.462064  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:02.542386  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:02.702778  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:03.023363  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:03.664561  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:04.944951  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:07.505218  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:07.601597  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/old-k8s-version-20220512231738-516044/client.crt: no such file or directory
E0512 23:46:12.626056  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:16.391714  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/auto-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:22.866599  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
E0512 23:46:43.347544  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/bridge-20220512231715-516044/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130126983s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (374.79s)
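
The kubenet profile fails the same way as bridge above, with the DNS server unreachable rather than returning a wrong answer. When the server itself cannot be reached, the next place to look is usually the CoreDNS pods; a possible check (the k8s-app=kube-dns label is the standard selector for CoreDNS, assumed here rather than taken from this run):

	# Are the CoreDNS pods running, and what are they logging?
	kubectl --context kubenet-20220512231715-516044 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context kubenet-20220512231715-516044 -n kube-system logs -l k8s-app=kube-dns --tail=50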

                                                
                                    

Test pass (253/281)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.99
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.23.5/json-events 7.57
11 TestDownloadOnly/v1.23.5/preload-exists 0
15 TestDownloadOnly/v1.23.5/LogsDuration 0.08
17 TestDownloadOnly/v1.23.6-rc.0/json-events 8.79
18 TestDownloadOnly/v1.23.6-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.6-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.33
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
25 TestDownloadOnlyKic 4.06
26 TestBinaryMirror 2.92
27 TestOffline 65.78
29 TestAddons/Setup 118.6
31 TestAddons/parallel/Registry 14.02
32 TestAddons/parallel/Ingress 22.94
33 TestAddons/parallel/MetricsServer 5.63
34 TestAddons/parallel/HelmTiller 11.59
36 TestAddons/parallel/CSI 40.42
38 TestAddons/serial/GCPAuth 39.92
39 TestAddons/StoppedEnableDisable 11.08
40 TestCertOptions 39.17
41 TestCertExpiration 228.12
42 TestDockerFlags 45.76
43 TestForceSystemdFlag 45.74
44 TestForceSystemdEnv 33.67
45 TestKVMDriverInstallOrUpdate 4.63
49 TestErrorSpam/setup 25.47
50 TestErrorSpam/start 0.98
51 TestErrorSpam/status 1.19
52 TestErrorSpam/pause 1.47
53 TestErrorSpam/unpause 1.63
54 TestErrorSpam/stop 10.94
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 38.5
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 5.26
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.17
65 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
66 TestFunctional/serial/CacheCmd/cache/add_local 1.69
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
70 TestFunctional/serial/CacheCmd/cache/cache_reload 1.97
71 TestFunctional/serial/CacheCmd/cache/delete 0.13
72 TestFunctional/serial/MinikubeKubectlCmd 0.11
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
74 TestFunctional/serial/ExtraConfig 31.91
75 TestFunctional/serial/ComponentHealth 0.06
76 TestFunctional/serial/LogsCmd 1.31
77 TestFunctional/serial/LogsFileCmd 1.36
79 TestFunctional/parallel/ConfigCmd 0.57
80 TestFunctional/parallel/DashboardCmd 14.49
81 TestFunctional/parallel/DryRun 0.63
82 TestFunctional/parallel/InternationalLanguage 0.26
83 TestFunctional/parallel/StatusCmd 1.3
86 TestFunctional/parallel/ServiceCmd 12.96
87 TestFunctional/parallel/ServiceCmdConnect 13.67
88 TestFunctional/parallel/AddonsCmd 0.22
89 TestFunctional/parallel/PersistentVolumeClaim 36.56
91 TestFunctional/parallel/SSHCmd 0.9
92 TestFunctional/parallel/CpCmd 2.03
93 TestFunctional/parallel/MySQL 24.66
94 TestFunctional/parallel/FileSync 0.5
95 TestFunctional/parallel/CertSync 2.74
99 TestFunctional/parallel/NodeLabels 0.09
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
103 TestFunctional/parallel/ProfileCmd/profile_not_create 0.64
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.24
108 TestFunctional/parallel/ProfileCmd/profile_list 0.56
109 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
110 TestFunctional/parallel/DockerEnv/bash 1.26
111 TestFunctional/parallel/Version/short 0.07
112 TestFunctional/parallel/Version/components 2.28
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.4
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.29
118 TestFunctional/parallel/ImageCommands/Setup 2.2
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.06
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.98
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.93
131 TestFunctional/parallel/MountCmd/any-port 17.92
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.79
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.28
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.92
136 TestFunctional/parallel/MountCmd/specific-port 2.38
137 TestFunctional/delete_addon-resizer_images 0.09
138 TestFunctional/delete_my-image_image 0.03
139 TestFunctional/delete_minikube_cached_images 0.03
142 TestIngressAddonLegacy/StartLegacyK8sCluster 61.39
144 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.66
145 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
146 TestIngressAddonLegacy/serial/ValidateIngressAddons 35.35
149 TestJSONOutput/start/Command 40.35
150 TestJSONOutput/start/Audit 0
152 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/pause/Command 0.69
156 TestJSONOutput/pause/Audit 0
158 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/unpause/Command 0.6
162 TestJSONOutput/unpause/Audit 0
164 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/stop/Command 10.85
168 TestJSONOutput/stop/Audit 0
170 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
172 TestErrorJSONOutput 0.29
174 TestKicCustomNetwork/create_custom_network 26.78
175 TestKicCustomNetwork/use_default_bridge_network 26.89
176 TestKicExistingNetwork 26.89
177 TestKicCustomSubnet 27.28
178 TestMainNoArgs 0.06
181 TestMountStart/serial/StartWithMountFirst 5.48
182 TestMountStart/serial/VerifyMountFirst 0.35
183 TestMountStart/serial/StartWithMountSecond 5.87
184 TestMountStart/serial/VerifyMountSecond 0.35
185 TestMountStart/serial/DeleteFirst 1.72
186 TestMountStart/serial/VerifyMountPostDelete 0.35
187 TestMountStart/serial/Stop 1.27
188 TestMountStart/serial/RestartStopped 6.97
189 TestMountStart/serial/VerifyMountPostStop 0.35
192 TestMultiNode/serial/FreshStart2Nodes 71.81
193 TestMultiNode/serial/DeployApp2Nodes 4.93
194 TestMultiNode/serial/PingHostFrom2Pods 0.88
195 TestMultiNode/serial/AddNode 26.37
196 TestMultiNode/serial/ProfileList 0.38
197 TestMultiNode/serial/CopyFile 12.52
198 TestMultiNode/serial/StopNode 2.54
199 TestMultiNode/serial/StartAfterStop 24.81
200 TestMultiNode/serial/RestartKeepsNodes 101.61
201 TestMultiNode/serial/DeleteNode 5.27
202 TestMultiNode/serial/StopMultiNode 21.74
203 TestMultiNode/serial/RestartMultiNode 59.98
204 TestMultiNode/serial/ValidateNameConflict 27.88
209 TestPreload 116.96
211 TestScheduledStopUnix 98.39
212 TestSkaffold 56.72
214 TestInsufficientStorage 13.32
215 TestRunningBinaryUpgrade 60.14
217 TestKubernetesUpgrade 86.06
218 TestMissingContainerUpgrade 125.74
220 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
221 TestNoKubernetes/serial/StartWithK8s 43.77
222 TestNoKubernetes/serial/StartWithStopK8s 18.2
223 TestNoKubernetes/serial/Start 5.71
224 TestStoppedBinaryUpgrade/Setup 0.85
225 TestStoppedBinaryUpgrade/Upgrade 129.01
226 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
227 TestNoKubernetes/serial/ProfileList 1.67
228 TestNoKubernetes/serial/Stop 1.97
229 TestNoKubernetes/serial/StartNoArgs 7.22
230 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.67
238 TestStoppedBinaryUpgrade/MinikubeLogs 1.68
240 TestPause/serial/Start 45.21
253 TestStartStop/group/old-k8s-version/serial/FirstStart 125.6
254 TestPause/serial/SecondStartNoReconfiguration 6.46
256 TestStartStop/group/no-preload/serial/FirstStart 57.99
257 TestPause/serial/Pause 0.96
258 TestPause/serial/VerifyStatus 0.52
259 TestPause/serial/Unpause 0.8
260 TestPause/serial/PauseAgain 0.91
261 TestPause/serial/DeletePaused 2.68
262 TestPause/serial/VerifyDeletedResources 8.67
264 TestStartStop/group/embed-certs/serial/FirstStart 250.79
266 TestStartStop/group/default-k8s-different-port/serial/FirstStart 41.13
267 TestStartStop/group/no-preload/serial/DeployApp 9.53
268 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.71
269 TestStartStop/group/no-preload/serial/Stop 10.88
270 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.43
271 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
272 TestStartStop/group/no-preload/serial/SecondStart 335.08
273 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.63
274 TestStartStop/group/default-k8s-different-port/serial/Stop 10.9
275 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.25
276 TestStartStop/group/default-k8s-different-port/serial/SecondStart 332.36
277 TestStartStop/group/old-k8s-version/serial/DeployApp 9.44
278 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.61
279 TestStartStop/group/old-k8s-version/serial/Stop 10.99
280 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
281 TestStartStop/group/old-k8s-version/serial/SecondStart 402.54
282 TestStartStop/group/embed-certs/serial/DeployApp 10.41
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.65
284 TestStartStop/group/embed-certs/serial/Stop 10.88
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
286 TestStartStop/group/embed-certs/serial/SecondStart 592.91
287 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.05
288 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 18.01
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.45
291 TestStartStop/group/no-preload/serial/Pause 3.21
293 TestStartStop/group/newest-cni/serial/FirstStart 43.68
294 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.22
295 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.55
296 TestStartStop/group/default-k8s-different-port/serial/Pause 4.52
297 TestNetworkPlugins/group/auto/Start 46.78
298 TestStartStop/group/newest-cni/serial/DeployApp 0
299 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.75
300 TestStartStop/group/newest-cni/serial/Stop 10.76
301 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
302 TestStartStop/group/newest-cni/serial/SecondStart 20.16
303 TestNetworkPlugins/group/auto/KubeletFlags 0.38
304 TestNetworkPlugins/group/auto/NetCatPod 12.19
305 TestNetworkPlugins/group/auto/DNS 0.17
306 TestNetworkPlugins/group/auto/Localhost 0.19
307 TestNetworkPlugins/group/auto/HairPin 5.2
308 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
309 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.42
311 TestStartStop/group/newest-cni/serial/Pause 3.21
312 TestNetworkPlugins/group/false/Start 42.8
313 TestNetworkPlugins/group/cilium/Start 80.55
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.47
317 TestStartStop/group/old-k8s-version/serial/Pause 3.58
319 TestNetworkPlugins/group/false/KubeletFlags 0.4
320 TestNetworkPlugins/group/false/NetCatPod 11.25
321 TestNetworkPlugins/group/false/DNS 0.17
322 TestNetworkPlugins/group/false/Localhost 0.15
323 TestNetworkPlugins/group/false/HairPin 5.13
325 TestNetworkPlugins/group/cilium/ControllerPod 5.02
326 TestNetworkPlugins/group/cilium/KubeletFlags 0.59
327 TestNetworkPlugins/group/cilium/NetCatPod 11.14
328 TestNetworkPlugins/group/cilium/DNS 0.14
329 TestNetworkPlugins/group/cilium/Localhost 0.15
330 TestNetworkPlugins/group/cilium/HairPin 0.15
331 TestNetworkPlugins/group/enable-default-cni/Start 42.14
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.46
340 TestNetworkPlugins/group/bridge/Start 44.55
341 TestNetworkPlugins/group/kubenet/Start 285.93
342 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
343 TestNetworkPlugins/group/bridge/NetCatPod 11.2
345 TestNetworkPlugins/group/kubenet/KubeletFlags 0.37
346 TestNetworkPlugins/group/kubenet/NetCatPod 10.26
TestDownloadOnly/v1.16.0/json-events (14.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220512225045-516044 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220512225045-516044 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (14.993993637s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.99s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220512225045-516044
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220512225045-516044: exit status 85 (79.799978ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 22:50:45
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 22:50:45.406397  516056 out.go:296] Setting OutFile to fd 1 ...
	I0512 22:50:45.406799  516056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:50:45.406816  516056 out.go:309] Setting ErrFile to fd 2...
	I0512 22:50:45.406824  516056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:50:45.407037  516056 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	W0512 22:50:45.407265  516056 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/config/config.json: no such file or directory
	I0512 22:50:45.407616  516056 out.go:303] Setting JSON to true
	I0512 22:50:45.408664  516056 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":20001,"bootTime":1652375844,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 22:50:45.408752  516056 start.go:125] virtualization: kvm guest
	I0512 22:50:45.411252  516056 out.go:97] [download-only-20220512225045-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	W0512 22:50:45.411345  516056 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball: no such file or directory
	I0512 22:50:45.412832  516056 out.go:169] MINIKUBE_LOCATION=12739
	I0512 22:50:45.411372  516056 notify.go:193] Checking for updates...
	I0512 22:50:45.415463  516056 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 22:50:45.416856  516056 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 22:50:45.418272  516056 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 22:50:45.419584  516056 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0512 22:50:45.422041  516056 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0512 22:50:45.422248  516056 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 22:50:45.459232  516056 docker.go:137] docker version: linux-20.10.16
	I0512 22:50:45.459312  516056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 22:50:45.859874  516056 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2022-05-12 22:50:45.486621817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 22:50:45.859985  516056 docker.go:254] overlay module found
	I0512 22:50:45.862017  516056 out.go:97] Using the docker driver based on user configuration
	I0512 22:50:45.862044  516056 start.go:284] selected driver: docker
	I0512 22:50:45.862050  516056 start.go:806] validating driver "docker" against <nil>
	I0512 22:50:45.862279  516056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 22:50:45.967377  516056 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2022-05-12 22:50:45.890691653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 22:50:45.967548  516056 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 22:50:45.968261  516056 start_flags.go:373] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0512 22:50:45.968461  516056 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0512 22:50:45.970557  516056 out.go:169] Using Docker driver with the root privilege
	I0512 22:50:45.971927  516056 cni.go:95] Creating CNI manager for ""
	I0512 22:50:45.971952  516056 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 22:50:45.971961  516056 start_flags.go:306] config:
	{Name:download-only-20220512225045-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220512225045-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDo
main:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 22:50:45.973530  516056 out.go:97] Starting control plane node download-only-20220512225045-516044 in cluster download-only-20220512225045-516044
	I0512 22:50:45.973554  516056 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 22:50:45.974839  516056 out.go:97] Pulling base image ...
	I0512 22:50:45.974865  516056 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0512 22:50:45.974965  516056 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0512 22:50:46.018443  516056 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
	I0512 22:50:46.018470  516056 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0512 22:50:46.018767  516056 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0512 22:50:46.018855  516056 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0512 22:50:46.083441  516056 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0512 22:50:46.083474  516056 cache.go:57] Caching tarball of preloaded images
	I0512 22:50:46.083666  516056 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0512 22:50:46.085897  516056 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0512 22:50:46.085926  516056 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0512 22:50:46.204258  516056 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0512 22:50:50.064843  516056 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0512 22:50:50.064945  516056 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0512 22:50:50.747951  516056 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0512 22:50:50.748294  516056 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/download-only-20220512225045-516044/config.json ...
	I0512 22:50:50.748329  516056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/download-only-20220512225045-516044/config.json: {Name:mkb307e84e4ba6ad06662736b2474250e44b292e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 22:50:50.748549  516056 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0512 22:50:50.748742  516056 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220512225045-516044"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
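Note that `minikube logs` exiting with status 85 is the expected outcome here: the download-only profile never created a control plane node, so the test passes precisely because it asserts that specific non-zero exit. As a minimal sketch of how such an assertion can be written with the Go standard library (a hypothetical helper, not the actual aaa_download_only_test.go source):

```go
package example

import (
	"errors"
	"os/exec"
	"testing"
)

// assertExitCode runs a command and fails the test unless it exits with the
// expected non-zero status (e.g. 85 for "minikube logs" against a profile
// whose control plane node was never created). Hypothetical helper.
func assertExitCode(t *testing.T, want int, name string, args ...string) {
	t.Helper()
	err := exec.Command(name, args...).Run()
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		t.Fatalf("expected an exit error, got %v", err)
	}
	if got := exitErr.ExitCode(); got != want {
		t.Fatalf("exit code = %d, want %d", got, want)
	}
}
```

In this run the call would be assertExitCode(t, 85, "out/minikube-linux-amd64", "logs", "-p", "download-only-20220512225045-516044").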

TestDownloadOnly/v1.23.5/json-events (7.57s)

=== RUN   TestDownloadOnly/v1.23.5/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220512225045-516044 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220512225045-516044 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.572404271s)
--- PASS: TestDownloadOnly/v1.23.5/json-events (7.57s)

TestDownloadOnly/v1.23.5/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.5/preload-exists
--- PASS: TestDownloadOnly/v1.23.5/preload-exists (0.00s)

TestDownloadOnly/v1.23.5/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.23.5/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220512225045-516044
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220512225045-516044: exit status 85 (82.257226ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 22:51:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 22:51:00.482235  516221 out.go:296] Setting OutFile to fd 1 ...
	I0512 22:51:00.482416  516221 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:51:00.482427  516221 out.go:309] Setting ErrFile to fd 2...
	I0512 22:51:00.482431  516221 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:51:00.482531  516221 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	W0512 22:51:00.482645  516221 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/config/config.json: no such file or directory
	I0512 22:51:00.482760  516221 out.go:303] Setting JSON to true
	I0512 22:51:00.483554  516221 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":20017,"bootTime":1652375844,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 22:51:00.483613  516221 start.go:125] virtualization: kvm guest
	I0512 22:51:00.485914  516221 out.go:97] [download-only-20220512225045-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0512 22:51:00.487557  516221 out.go:169] MINIKUBE_LOCATION=12739
	I0512 22:51:00.486056  516221 notify.go:193] Checking for updates...
	I0512 22:51:00.490238  516221 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 22:51:00.491699  516221 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 22:51:00.493237  516221 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 22:51:00.494626  516221 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0512 22:51:00.497025  516221 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0512 22:51:00.497427  516221 config.go:178] Loaded profile config "download-only-20220512225045-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0512 22:51:00.497500  516221 start.go:714] api.Load failed for download-only-20220512225045-516044: filestore "download-only-20220512225045-516044": Docker machine "download-only-20220512225045-516044" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0512 22:51:00.497553  516221 driver.go:358] Setting default libvirt URI to qemu:///system
	W0512 22:51:00.497584  516221 start.go:714] api.Load failed for download-only-20220512225045-516044: filestore "download-only-20220512225045-516044": Docker machine "download-only-20220512225045-516044" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0512 22:51:00.534320  516221 docker.go:137] docker version: linux-20.10.16
	I0512 22:51:00.534405  516221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 22:51:00.634574  516221 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-05-12 22:51:00.560766116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 22:51:00.634687  516221 docker.go:254] overlay module found
	I0512 22:51:00.636566  516221 out.go:97] Using the docker driver based on existing profile
	I0512 22:51:00.636583  516221 start.go:284] selected driver: docker
	I0512 22:51:00.636588  516221 start.go:806] validating driver "docker" against &{Name:download-only-20220512225045-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220512225045-516044 Na
mespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 22:51:00.636874  516221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 22:51:00.734951  516221 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-05-12 22:51:00.663580613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 22:51:00.735529  516221 cni.go:95] Creating CNI manager for ""
	I0512 22:51:00.735545  516221 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 22:51:00.735556  516221 start_flags.go:306] config:
	{Name:download-only-20220512225045-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220512225045-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDo
main:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 22:51:00.737414  516221 out.go:97] Starting control plane node download-only-20220512225045-516044 in cluster download-only-20220512225045-516044
	I0512 22:51:00.737451  516221 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 22:51:00.738720  516221 out.go:97] Pulling base image ...
	I0512 22:51:00.738746  516221 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 22:51:00.738863  516221 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0512 22:51:00.780437  516221 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
	I0512 22:51:00.780471  516221 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0512 22:51:00.780754  516221 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0512 22:51:00.780790  516221 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0512 22:51:00.780798  516221 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0512 22:51:00.780823  516221 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0512 22:51:00.844683  516221 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 22:51:00.844709  516221 cache.go:57] Caching tarball of preloaded images
	I0512 22:51:00.844893  516221 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 22:51:00.846949  516221 out.go:97] Downloading Kubernetes v1.23.5 preload ...
	I0512 22:51:00.846970  516221 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0512 22:51:00.960918  516221 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4?checksum=md5:d0fb3d86acaea9a7773bdef3468eac56 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220512225045-516044"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5/LogsDuration (0.08s)
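The download.go lines above fetch the preload tarball from a URL whose ?checksum=md5:... query parameter carries the expected digest, and verify that digest before the tarball is trusted. A minimal sketch of that download-then-verify pattern, assuming a plain HTTP GET (an illustration only, not minikube's actual download package):

```go
package example

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest while hashing it in a single pass,
// then compares the result to the expected hex-encoded MD5 (as carried in
// the ?checksum=md5:... parameter seen in the logs above).
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Write the body to disk and hash it at the same time.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}
```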

TestDownloadOnly/v1.23.6-rc.0/json-events (8.79s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220512225045-516044 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220512225045-516044 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.791471228s)
--- PASS: TestDownloadOnly/v1.23.6-rc.0/json-events (8.79s)

TestDownloadOnly/v1.23.6-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.6-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220512225045-516044
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220512225045-516044: exit status 85 (79.852238ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 22:51:08
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 22:51:08.139902  516389 out.go:296] Setting OutFile to fd 1 ...
	I0512 22:51:08.140047  516389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:51:08.140062  516389 out.go:309] Setting ErrFile to fd 2...
	I0512 22:51:08.140069  516389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:51:08.140174  516389 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	W0512 22:51:08.140294  516389 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/config/config.json: no such file or directory
	I0512 22:51:08.140404  516389 out.go:303] Setting JSON to true
	I0512 22:51:08.141319  516389 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":20024,"bootTime":1652375844,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 22:51:08.141384  516389 start.go:125] virtualization: kvm guest
	I0512 22:51:08.143792  516389 out.go:97] [download-only-20220512225045-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0512 22:51:08.145438  516389 out.go:169] MINIKUBE_LOCATION=12739
	I0512 22:51:08.143955  516389 notify.go:193] Checking for updates...
	I0512 22:51:08.148273  516389 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 22:51:08.149762  516389 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 22:51:08.151225  516389 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 22:51:08.152687  516389 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0512 22:51:08.155075  516389 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0512 22:51:08.155493  516389 config.go:178] Loaded profile config "download-only-20220512225045-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	W0512 22:51:08.155549  516389 start.go:714] api.Load failed for download-only-20220512225045-516044: filestore "download-only-20220512225045-516044": Docker machine "download-only-20220512225045-516044" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0512 22:51:08.155610  516389 driver.go:358] Setting default libvirt URI to qemu:///system
	W0512 22:51:08.155648  516389 start.go:714] api.Load failed for download-only-20220512225045-516044: filestore "download-only-20220512225045-516044": Docker machine "download-only-20220512225045-516044" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0512 22:51:08.193177  516389 docker.go:137] docker version: linux-20.10.16
	I0512 22:51:08.193246  516389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 22:51:08.293378  516389 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-05-12 22:51:08.219420642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 22:51:08.293497  516389 docker.go:254] overlay module found
	I0512 22:51:08.295450  516389 out.go:97] Using the docker driver based on existing profile
	I0512 22:51:08.295476  516389 start.go:284] selected driver: docker
	I0512 22:51:08.295489  516389 start.go:806] validating driver "docker" against &{Name:download-only-20220512225045-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220512225045-516044 Na
mespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 22:51:08.295762  516389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 22:51:08.393554  516389 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-05-12 22:51:08.321188053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 22:51:08.394079  516389 cni.go:95] Creating CNI manager for ""
	I0512 22:51:08.394094  516389 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 22:51:08.394106  516389 start_flags.go:306] config:
	{Name:download-only-20220512225045-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:download-only-20220512225045-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 22:51:08.396109  516389 out.go:97] Starting control plane node download-only-20220512225045-516044 in cluster download-only-20220512225045-516044
	I0512 22:51:08.396138  516389 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 22:51:08.397381  516389 out.go:97] Pulling base image ...
	I0512 22:51:08.397407  516389 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0512 22:51:08.397513  516389 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
	I0512 22:51:08.438486  516389 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
	I0512 22:51:08.438513  516389 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c to local cache
	I0512 22:51:08.438763  516389 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory
	I0512 22:51:08.438788  516389 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local cache directory, skipping pull
	I0512 22:51:08.438797  516389 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in cache, skipping pull
	I0512 22:51:08.438820  516389 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c as a tarball
	I0512 22:51:08.504124  516389 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6-rc.0/preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0512 22:51:08.504147  516389 cache.go:57] Caching tarball of preloaded images
	I0512 22:51:08.504315  516389 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0512 22:51:08.506397  516389 out.go:97] Downloading Kubernetes v1.23.6-rc.0 preload ...
	I0512 22:51:08.506423  516389 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0512 22:51:08.619665  516389 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6-rc.0/preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8c474a02b5d7628fe0abb1816ff0a9c8 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220512225045-516044"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.08s)
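The image.go lines above show the lookup order for the kic base image: the local docker daemon is checked first, then the local cache directory, and a pull happens only if both miss. A minimal sketch of that precedence, assuming a docker CLI probe (illustrative only; minikube's real image resolution works differently, and the helper name is hypothetical):

```go
package example

import (
	"fmt"
	"os"
	"os/exec"
)

// resolveBaseImage mirrors the lookup order visible in the image.go log
// lines above: local docker daemon, then on-disk cache, then a pull.
func resolveBaseImage(ref, cachePath string) (string, error) {
	if exec.Command("docker", "image", "inspect", ref).Run() == nil {
		return "daemon", nil // found in local docker daemon, skipping pull
	}
	if _, err := os.Stat(cachePath); err == nil {
		return "cache", nil // found in local cache directory, skipping pull
	}
	return "", fmt.Errorf("%s not cached; a pull would be required", ref)
}
```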

TestDownloadOnly/DeleteAll (0.33s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.33s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220512225045-516044
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (4.06s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220512225117-516044 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220512225117-516044 --force --alsologtostderr --driver=docker  --container-runtime=docker: (2.664565701s)
helpers_test.go:175: Cleaning up "download-docker-20220512225117-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220512225117-516044
--- PASS: TestDownloadOnlyKic (4.06s)

TestBinaryMirror (2.92s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220512225121-516044 --alsologtostderr --binary-mirror http://127.0.0.1:34585 --driver=docker  --container-runtime=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-20220512225121-516044 --alsologtostderr --binary-mirror http://127.0.0.1:34585 --driver=docker  --container-runtime=docker: (2.534827292s)
helpers_test.go:175: Cleaning up "binary-mirror-20220512225121-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220512225121-516044
--- PASS: TestBinaryMirror (2.92s)

TestOffline (65.78s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20220512231347-516044 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220512231347-516044 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m2.999832202s)
helpers_test.go:175: Cleaning up "offline-docker-20220512231347-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20220512231347-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220512231347-516044: (2.783263833s)
--- PASS: TestOffline (65.78s)

TestAddons/Setup (118.6s)
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220512225124-516044 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220512225124-516044 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m58.598976683s)
--- PASS: TestAddons/Setup (118.60s)

TestAddons/parallel/Registry (14.02s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 11.587329ms
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-5g85c" [5441c2ca-8842-47b2-9ed4-53cae9c9628b] Running
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00820708s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-9mgmk" [452688da-c432-4c80-b7e6-5ca753d6cec4] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008688377s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220512225124-516044 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220512225124-516044 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:295: (dbg) Done: kubectl --context addons-20220512225124-516044 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.825012742s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 ip
2022/05/12 22:53:36 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:338: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.02s)

TestAddons/parallel/Ingress (22.94s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220512225124-516044 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220512225124-516044 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220512225124-516044 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [a5b31c32-4281-4861-97fb-1686693dc83d] Pending
helpers_test.go:342: "nginx" [a5b31c32-4281-4861-97fb-1686693dc83d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [a5b31c32-4281-4861-97fb-1686693dc83d] Running
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00806892s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context addons-20220512225124-516044 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable ingress-dns --alsologtostderr -v=1: (1.742896083s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable ingress --alsologtostderr -v=1: (7.574348068s)
--- PASS: TestAddons/parallel/Ingress (22.94s)

TestAddons/parallel/MetricsServer (5.63s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 9.83969ms
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-bd6f4dd56-z8sxt" [b4803019-7089-4076-bef9-79bb7179c553] Running
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007985253s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220512225124-516044 top pods -n kube-system
addons_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)

TestAddons/parallel/HelmTiller (11.59s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 2.286507ms
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-6d67d5465d-tbgrt" [64ba8a6c-bda0-41db-9e75-43d193005d94] Running
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008617013s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220512225124-516044 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:423: (dbg) Done: kubectl --context addons-20220512225124-516044 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.048603344s)
addons_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.59s)

TestAddons/parallel/CSI (40.42s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 11.705814ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220512225124-516044 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220512225124-516044 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220512225124-516044 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [fd197b5c-53a8-4386-8441-74f674798157] Pending
helpers_test.go:342: "task-pv-pod" [fd197b5c-53a8-4386-8441-74f674798157] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [fd197b5c-53a8-4386-8441-74f674798157] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.028661563s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220512225124-516044 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220512225124-516044 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220512225124-516044 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220512225124-516044 delete pod task-pv-pod
addons_test.go:544: (dbg) Done: kubectl --context addons-20220512225124-516044 delete pod task-pv-pod: (1.755610399s)
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220512225124-516044 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220512225124-516044 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220512225124-516044 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220512225124-516044 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [bf39ca0c-7e11-4314-a692-ac10d58cb854] Pending
helpers_test.go:342: "task-pv-pod-restore" [bf39ca0c-7e11-4314-a692-ac10d58cb854] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [bf39ca0c-7e11-4314-a692-ac10d58cb854] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.006263581s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220512225124-516044 delete pod task-pv-pod-restore
addons_test.go:576: (dbg) Done: kubectl --context addons-20220512225124-516044 delete pod task-pv-pod-restore: (1.363621835s)
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220512225124-516044 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220512225124-516044 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.908484286s)
addons_test.go:592: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.42s)

TestAddons/serial/GCPAuth (39.92s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220512225124-516044 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [42954e2e-14c1-47ca-8c94-82113a5875bf] Pending
helpers_test.go:342: "busybox" [42954e2e-14c1-47ca-8c94-82113a5875bf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [42954e2e-14c1-47ca-8c94-82113a5875bf] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.006258361s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220512225124-516044 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220512225124-516044 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-linux-amd64 -p addons-20220512225124-516044 addons disable gcp-auth --alsologtostderr -v=1: (5.775955973s)
addons_test.go:681: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220512225124-516044 addons enable gcp-auth
addons_test.go:687: (dbg) Run:  kubectl --context addons-20220512225124-516044 apply -f testdata/private-image.yaml
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7f8587d5b7-c2m7z" [9833bcff-d51e-44ff-9084-afb11caba960] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7f8587d5b7-c2m7z" [9833bcff-d51e-44ff-9084-afb11caba960] Running
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 15.006442016s
addons_test.go:700: (dbg) Run:  kubectl --context addons-20220512225124-516044 apply -f testdata/private-image-eu.yaml
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-869dcfd8c7-486tj" [dcbf1b93-6eaf-4c01-986f-96e7f51b4c0f] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-869dcfd8c7-486tj" [dcbf1b93-6eaf-4c01-986f-96e7f51b4c0f] Running
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.006228589s
--- PASS: TestAddons/serial/GCPAuth (39.92s)

TestAddons/StoppedEnableDisable (11.08s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220512225124-516044
addons_test.go:132: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220512225124-516044: (10.877366579s)
addons_test.go:136: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220512225124-516044
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220512225124-516044
--- PASS: TestAddons/StoppedEnableDisable (11.08s)

TestCertOptions (39.17s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220512231433-516044 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0512 23:14:46.319479  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220512231433-516044 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (35.788344725s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220512231433-516044 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220512231433-516044 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220512231433-516044 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220512231433-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220512231433-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220512231433-516044: (2.540885768s)
--- PASS: TestCertOptions (39.17s)

TestCertExpiration (228.12s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220512231433-516044 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220512231433-516044 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (35.769227084s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220512231433-516044 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220512231433-516044 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (4.643741429s)
helpers_test.go:175: Cleaning up "cert-expiration-20220512231433-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220512231433-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220512231433-516044: (7.700882297s)
--- PASS: TestCertExpiration (228.12s)

TestDockerFlags (45.76s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20220512231347-516044 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220512231347-516044 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.744803735s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220512231347-516044 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220512231347-516044 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220512231347-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20220512231347-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220512231347-516044: (2.895871375s)
--- PASS: TestDockerFlags (45.76s)

TestForceSystemdFlag (45.74s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220512231347-516044 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220512231347-516044 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.468096367s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220512231347-516044 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220512231347-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220512231347-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220512231347-516044: (2.680363071s)
--- PASS: TestForceSystemdFlag (45.74s)

TestForceSystemdEnv (33.67s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220512231720-516044 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:150: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220512231720-516044 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.66279383s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220512231720-516044 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220512231720-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220512231720-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220512231720-516044: (2.543776301s)
--- PASS: TestForceSystemdEnv (33.67s)

TestKVMDriverInstallOrUpdate (4.63s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.63s)

TestErrorSpam/setup (25.47s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220512225457-516044 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220512225457-516044 --driver=docker  --container-runtime=docker
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220512225457-516044 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220512225457-516044 --driver=docker  --container-runtime=docker: (25.47075217s)
--- PASS: TestErrorSpam/setup (25.47s)

TestErrorSpam/start (0.98s)
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 start --dry-run
--- PASS: TestErrorSpam/start (0.98s)

TestErrorSpam/status (1.19s)
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 status
--- PASS: TestErrorSpam/status (1.19s)

TestErrorSpam/pause (1.47s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.63s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

TestErrorSpam/stop (10.94s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 stop: (10.670411814s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220512225457-516044 --log_dir /tmp/nospam-20220512225457-516044 stop
--- PASS: TestErrorSpam/stop (10.94s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1784: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/files/etc/test/nested/copy/516044/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.5s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2163: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220512225541-516044 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2163: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220512225541-516044 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (38.495419809s)
--- PASS: TestFunctional/serial/StartWithProxy (38.50s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.26s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220512225541-516044 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220512225541-516044 --alsologtostderr -v=8: (5.262803998s)
functional_test.go:658: soft start took 5.263451258s for "functional-20220512225541-516044" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.26s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.17s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-20220512225541-516044 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.17s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 cache add k8s.gcr.io/pause:3.3: (1.460884104s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 cache add k8s.gcr.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 cache add k8s.gcr.io/pause:latest: (1.181541858s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

TestFunctional/serial/CacheCmd/cache/add_local (1.69s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220512225541-516044 /tmp/TestFunctionalserialCacheCmdcacheadd_local1557679918/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 cache add minikube-local-cache-test:functional-20220512225541-516044
functional_test.go:1084: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 cache add minikube-local-cache-test:functional-20220512225541-516044: (1.391075101s)
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 cache delete minikube-local-cache-test:functional-20220512225541-516044
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220512225541-516044
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (380.269419ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 kubectl -- --context functional-20220512225541-516044 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-20220512225541-516044 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (31.91s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220512225541-516044 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220512225541-516044 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.911383611s)
functional_test.go:756: restart took 31.91149911s for "functional-20220512225541-516044" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.91s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-20220512225541-516044 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.31s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 logs: (1.312545563s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.36s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 logs --file /tmp/TestFunctionalserialLogsFileCmd3072630297/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 logs --file /tmp/TestFunctionalserialLogsFileCmd3072630297/001/logs.txt: (1.363868853s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/parallel/ConfigCmd (0.57s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220512225541-516044 config get cpus: exit status 14 (89.754285ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220512225541-516044 config get cpus: exit status 14 (79.591069ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)

TestFunctional/parallel/DashboardCmd (14.49s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220512225541-516044 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220512225541-516044 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 554346: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.49s)

TestFunctional/parallel/DryRun (0.63s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220512225541-516044 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220512225541-516044 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (257.60648ms)

-- stdout --
	* [functional-20220512225541-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0512 22:57:33.454114  553854 out.go:296] Setting OutFile to fd 1 ...
	I0512 22:57:33.454227  553854 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:57:33.454237  553854 out.go:309] Setting ErrFile to fd 2...
	I0512 22:57:33.454242  553854 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:57:33.454326  553854 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 22:57:33.454547  553854 out.go:303] Setting JSON to false
	I0512 22:57:33.455678  553854 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":20409,"bootTime":1652375844,"procs":448,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 22:57:33.455743  553854 start.go:125] virtualization: kvm guest
	I0512 22:57:33.458394  553854 out.go:177] * [functional-20220512225541-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0512 22:57:33.459751  553854 out.go:177]   - MINIKUBE_LOCATION=12739
	I0512 22:57:33.461103  553854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 22:57:33.462466  553854 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 22:57:33.463670  553854 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 22:57:33.465001  553854 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0512 22:57:33.466709  553854 config.go:178] Loaded profile config "functional-20220512225541-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 22:57:33.467083  553854 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 22:57:33.513515  553854 docker.go:137] docker version: linux-20.10.16
	I0512 22:57:33.513662  553854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 22:57:33.632204  553854 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:94 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-12 22:57:33.54489366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 22:57:33.632326  553854 docker.go:254] overlay module found
	I0512 22:57:33.634562  553854 out.go:177] * Using the docker driver based on existing profile
	I0512 22:57:33.635908  553854 start.go:284] selected driver: docker
	I0512 22:57:33.635929  553854 start.go:806] validating driver "docker" against &{Name:functional-20220512225541-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220512225541-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 22:57:33.636052  553854 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 22:57:33.638455  553854 out.go:177] 
	W0512 22:57:33.639799  553854 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0512 22:57:33.641152  553854 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220512225541-516044 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.63s)
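The non-zero exit here is the expected one: --dry-run still validates the requested resources without touching the cluster, and 250MB is below the usable minimum of 1800MB, hence exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY); the second, flag-free dry run exits cleanly. Reproduction sketch:

    out/minikube-linux-amd64 start -p functional-20220512225541-516044 --dry-run --memory 250MB --driver=docker   # exit 23: below minimum
    out/minikube-linux-amd64 start -p functional-20220512225541-516044 --dry-run --driver=docker                  # exit 0 with a valid config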

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220512225541-516044 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220512225541-516044 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (263.81686ms)

-- stdout --
	* [functional-20220512225541-516044] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0512 22:57:22.000014  551351 out.go:296] Setting OutFile to fd 1 ...
	I0512 22:57:22.000144  551351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:57:22.000160  551351 out.go:309] Setting ErrFile to fd 2...
	I0512 22:57:22.000173  551351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 22:57:22.000474  551351 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 22:57:22.001001  551351 out.go:303] Setting JSON to false
	I0512 22:57:22.002400  551351 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":20398,"bootTime":1652375844,"procs":452,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0512 22:57:22.002476  551351 start.go:125] virtualization: kvm guest
	I0512 22:57:22.005347  551351 out.go:177] * [functional-20220512225541-516044] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	I0512 22:57:22.006925  551351 out.go:177]   - MINIKUBE_LOCATION=12739
	I0512 22:57:22.008396  551351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 22:57:22.009909  551351 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	I0512 22:57:22.011474  551351 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	I0512 22:57:22.013019  551351 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0512 22:57:22.014936  551351 config.go:178] Loaded profile config "functional-20220512225541-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 22:57:22.015566  551351 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 22:57:22.060942  551351 docker.go:137] docker version: linux-20.10.16
	I0512 22:57:22.061072  551351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 22:57:22.180526  551351 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:94 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:41 SystemTime:2022-05-12 22:57:22.096409418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 22:57:22.180630  551351 docker.go:254] overlay module found
	I0512 22:57:22.182763  551351 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0512 22:57:22.183992  551351 start.go:284] selected driver: docker
	I0512 22:57:22.184012  551351 start.go:806] validating driver "docker" against &{Name:functional-20220512225541-516044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652309540-13791@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220512225541-516044 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 22:57:22.184137  551351 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 22:57:22.186261  551351 out.go:177] 
	W0512 22:57:22.187593  551351 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0512 22:57:22.188981  551351 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
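The French stderr above is the same RSRC_INSUFFICIENT_REQ_MEMORY failure exercised in DryRun ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB"), just localized. A sketch of triggering it by hand, assuming the locale environment is what selects minikube's message catalog:

    # assumption: LC_ALL (or LANG) drives translation selection
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-20220512225541-516044 --dry-run --memory 250MB --driver=docker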

TestFunctional/parallel/StatusCmd (1.3s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
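status supports plain, Go-template, and JSON output, all exercised above (the label "kublet" is a spelling quirk in the test's own template string; the template field is {{.Kubelet}}):

    out/minikube-linux-amd64 -p functional-20220512225541-516044 status
    out/minikube-linux-amd64 -p functional-20220512225541-516044 status -o json
    out/minikube-linux-amd64 -p functional-20220512225541-516044 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'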

TestFunctional/parallel/ServiceCmd (12.96s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1435: (dbg) Run:  kubectl --context functional-20220512225541-516044 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-20220512225541-516044 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-znpxh" [fa6e7b4b-e373-49a9-a4f8-c029196ac769] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-znpxh" [fa6e7b4b-e373-49a9-a4f8-c029196ac769] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 9.00708917s
functional_test.go:1451: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 service list
functional_test.go:1451: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 service list: (1.777011986s)
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 service --namespace=default --https --url hello-node
functional_test.go:1478: found endpoint: https://192.168.49.2:31414
functional_test.go:1493: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 service hello-node --url --format={{.IP}}
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 service hello-node --url
functional_test.go:1513: found endpoint for hello-node: http://192.168.49.2:31414
--- PASS: TestFunctional/parallel/ServiceCmd (12.96s)
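The flow above is the standard NodePort round trip: create a deployment, expose it, then let minikube resolve the URL (the port, 31414 in this run, is allocated fresh each time):

    kubectl --context functional-20220512225541-516044 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-20220512225541-516044 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-20220512225541-516044 service hello-node --url   # e.g. http://192.168.49.2:31414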

TestFunctional/parallel/ServiceCmdConnect (13.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1561: (dbg) Run:  kubectl --context functional-20220512225541-516044 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1567: (dbg) Run:  kubectl --context functional-20220512225541-516044 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-swdhg" [2ea86457-a279-453a-b65b-5d178bcde449] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-swdhg" [2ea86457-a279-453a-b65b-5d178bcde449] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.006423148s
functional_test.go:1581: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1587: found endpoint for hello-node-connect: http://192.168.49.2:30610
functional_test.go:1607: http://192.168.49.2:30610: success! body:

Hostname: hello-node-connect-74cf8bc446-swdhg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30610
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.67s)
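The echoserver body above proves end-to-end reachability through the NodePort. The same probe by hand (the port, 30610 here, will differ per run):

    URL=$(out/minikube-linux-amd64 -p functional-20220512225541-516044 service hello-node-connect --url)
    curl -s "$URL"    # echoserver reflects the pod hostname and request headers back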

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1634: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (36.56s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [4ba26bb6-088a-4bf0-90f2-cbd6a9f73852] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.045308773s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220512225541-516044 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220512225541-516044 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220512225541-516044 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220512225541-516044 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220512225541-516044 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [c72b8cae-0e9a-4573-800c-6dd2489ddcb2] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [c72b8cae-0e9a-4573-800c-6dd2489ddcb2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [c72b8cae-0e9a-4573-800c-6dd2489ddcb2] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.042656939s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220512225541-516044 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220512225541-516044 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220512225541-516044 delete -f testdata/storage-provisioner/pod.yaml: (3.326337823s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220512225541-516044 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [b557ae96-a0bf-42d5-9c9f-8f79bbb58898] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b557ae96-a0bf-42d5-9c9f-8f79bbb58898] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b557ae96-a0bf-42d5-9c9f-8f79bbb58898] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007395166s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220512225541-516044 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.56s)
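The point of the second apply is persistence: a file written through the claim must survive deletion and recreation of the pod. Condensed, using the manifests from the repo's testdata (run kubectl with --context functional-20220512225541-516044 as above):

    kubectl apply -f testdata/storage-provisioner/pvc.yaml
    kubectl apply -f testdata/storage-provisioner/pod.yaml
    kubectl exec sp-pod -- touch /tmp/mount/foo
    kubectl delete -f testdata/storage-provisioner/pod.yaml
    kubectl apply -f testdata/storage-provisioner/pod.yaml    # new pod, same claim
    kubectl exec sp-pod -- ls /tmp/mount                      # foo is still there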

TestFunctional/parallel/SSHCmd (0.9s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1657: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1674: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.90s)

TestFunctional/parallel/CpCmd (2.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh -n functional-20220512225541-516044 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 cp functional-20220512225541-516044:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1654057691/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh -n functional-20220512225541-516044 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)
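minikube cp accepts an optional <profile>: prefix on either side, so the test copies a file into the node and back out, checking the contents over ssh each time:

    out/minikube-linux-amd64 -p functional-20220512225541-516044 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-20220512225541-516044 cp functional-20220512225541-516044:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /home/docker/cp-test.txt"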

TestFunctional/parallel/MySQL (24.66s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1722: (dbg) Run:  kubectl --context functional-20220512225541-516044 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-kskk9" [ef0cd57a-b1af-4d4e-9bc5-7b0664c77ef2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-kskk9" [ef0cd57a-b1af-4d4e-9bc5-7b0664c77ef2] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.060004024s
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220512225541-516044 exec mysql-b87c45988-kskk9 -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220512225541-516044 exec mysql-b87c45988-kskk9 -- mysql -ppassword -e "show databases;": exit status 1 (319.655864ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220512225541-516044 exec mysql-b87c45988-kskk9 -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220512225541-516044 exec mysql-b87c45988-kskk9 -- mysql -ppassword -e "show databases;": exit status 1 (378.942882ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220512225541-516044 exec mysql-b87c45988-kskk9 -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220512225541-516044 exec mysql-b87c45988-kskk9 -- mysql -ppassword -e "show databases;": exit status 1 (274.597259ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220512225541-516044 exec mysql-b87c45988-kskk9 -- mysql -ppassword -e "show databases;"
2022/05/12 22:57:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (24.66s)
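The two "Access denied" errors and the socket error above are almost certainly startup transients: mysqld answers connections before its initialization (including root password setup) completes, so the test simply retries until the query succeeds. A hand-rolled equivalent, using the pod name from this run:

    until kubectl --context functional-20220512225541-516044 exec mysql-b87c45988-kskk9 -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2    # wait out mysqld initialization
    done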

TestFunctional/parallel/FileSync (0.5s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1858: Checking for existence of /etc/test/nested/copy/516044/hosts within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /etc/test/nested/copy/516044/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1865: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.50s)
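File sync is what placed that hosts file: anything staged under $MINIKUBE_HOME/files before start is copied into the node at the same relative path. A sketch with a hypothetical staged file (demo.txt is an illustration, not from this run):

    # assumption: $MINIKUBE_HOME/files/etc/demo.txt existed before "minikube start"
    out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "cat /etc/demo.txt"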

TestFunctional/parallel/CertSync (2.74s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/516044.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /etc/ssl/certs/516044.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /usr/share/ca-certificates/516044.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /usr/share/ca-certificates/516044.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1928: Checking for existence of /etc/ssl/certs/5160442.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /etc/ssl/certs/5160442.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1928: Checking for existence of /usr/share/ca-certificates/5160442.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /usr/share/ca-certificates/5160442.pem"
functional_test.go:1928: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.74s)
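Cert sync installs certificates staged under $MINIKUBE_HOME/certs into both /etc/ssl/certs and /usr/share/ca-certificates inside the node; the hashed names checked above (51391683.0, 3ec20f2e.0) are the OpenSSL-style lookup links for the same files. Spot check:

    out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /etc/ssl/certs/516044.pem"
    out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash link to the same cert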

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220512225541-516044 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo systemctl is-active crio"
functional_test.go:1956: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo systemctl is-active crio": exit status 1 (361.462175ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
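The "Process exited with status 3" above is the expected outcome, not a failure: systemctl is-active exits 3 for an inactive unit, and this cluster runs dockerd, so crio should be down:

    out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo systemctl is-active crio"      # prints "inactive", exit 3
    out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo systemctl is-active docker"    # prints "active", exit 0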

TestFunctional/parallel/ProfileCmd/profile_not_create (0.64s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220512225541-516044 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220512225541-516044 apply -f testdata/testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [4df62a13-70b8-49e1-8133-1ee26657e8c3] Pending

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [4df62a13-70b8-49e1-8133-1ee26657e8c3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [4df62a13-70b8-49e1-8133-1ee26657e8c3] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.014530674s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.24s)
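This setup step only deploys the nginx backend; the tunnel started in StartTunnel is what later steps probe through. In interactive use the equivalent is roughly (the service name nginx-svc is inferred from the run=nginx-svc selector, not shown in the log):

    out/minikube-linux-amd64 -p functional-20220512225541-516044 tunnel &    # must stay running
    kubectl --context functional-20220512225541-516044 apply -f testdata/testsvc.yaml
    kubectl --context functional-20220512225541-516044 get svc nginx-svc     # EXTERNAL-IP populates once the tunnel routes it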

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1313: Took "492.108488ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "66.034207ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1364: Took "394.181581ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "71.845047ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/DockerEnv/bash (1.26s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220512225541-516044 docker-env) && out/minikube-linux-amd64 status -p functional-20220512225541-516044"
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220512225541-516044 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.26s)
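docker-env prints DOCKER_HOST and related exports that point the local docker CLI at the daemon inside the minikube node, which is why both checks above run behind a single eval:

    eval $(out/minikube-linux-amd64 -p functional-20220512225541-516044 docker-env)
    docker images    # now lists the images inside the cluster, not the host's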

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2185: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (2.28s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 version -o=json --components: (2.279581539s)
--- PASS: TestFunctional/parallel/Version/components (2.28s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220512225541-516044
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format table:
|---------------------------------------------|----------------------------------|---------------|--------|
|                    Image                    |               Tag                |   Image ID    |  Size  |
|---------------------------------------------|----------------------------------|---------------|--------|
| k8s.gcr.io/etcd                             | 3.5.1-0                          | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/pause                            | 3.6                              | 6270bb605e12e | 683kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>                           | 7801cfc6d5c07 | 34.4MB |
| k8s.gcr.io/pause                            | 3.1                              | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest                           | 7425d3a7c478e | 142MB  |
| docker.io/library/mysql                     | 5.7                              | a3d35804fa376 | 462MB  |
| k8s.gcr.io/kube-proxy                       | v1.23.5                          | 3c53fa8541f95 | 112MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.5                          | 884d49d6d8c9f | 53.5MB |
| k8s.gcr.io/pause                            | 3.3                              | 0184c1613d929 | 683kB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.5                          | 3fc1d62d65872 | 135MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.5                          | b0c9e5e4dbb14 | 125MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                           | a4ca41631cc7a | 46.8MB |
| gcr.io/google-containers/addon-resizer      | functional-20220512225541-516044 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | alpine                           | 51696c87e77e4 | 23.4MB |
| docker.io/kubernetesui/dashboard            | <none>                           | 7fff914c4a615 | 243MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                     | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | latest                           | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-20220512225541-516044 | 51959f54d3b2f | 30B    |
| gcr.io/k8s-minikube/storage-provisioner     | v5                               | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/echoserver                       | 1.8                              | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|----------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format json:
[{"id":"a3d35804fa376a141b9a9dad8f5534c3179f4c328d6efc67c5c5145d257c291a","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.5"],"size":"112000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"243000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[
],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"51959f54d3b2f0d66170064d68efe97e1895f227a38239dbe14ae93dc693df40","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220512225541-516044"],"size":"30"},{"id":"7425d3a7c478efbeb75f0937060117343a9a510f72f5f7ad9f14b1501a36940c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.5"]
,"size":"135000000"},{"id":"b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.5"],"size":"125000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.5"],"size":"53500000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220512225541-516044"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.40s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format yaml:
- id: 51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.5
size: "135000000"
- id: 3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.5
size: "112000000"
- id: b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.5
size: "125000000"
- id: 51959f54d3b2f0d66170064d68efe97e1895f227a38239dbe14ae93dc693df40
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220512225541-516044
size: "30"
- id: a3d35804fa376a141b9a9dad8f5534c3179f4c328d6efc67c5c5145d257c291a
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 7425d3a7c478efbeb75f0937060117343a9a510f72f5f7ad9f14b1501a36940c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: 7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "243000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.5
size: "53500000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
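
The four ImageList tests above exercise the same subcommand with different formatters; a minimal way to replay them by hand against the profile from this run (assuming it still exists) is:

    out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format short
    out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format table
    out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format json
    out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls --format yaml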

TestFunctional/parallel/ImageCommands/ImageBuild (3.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh pgrep buildkitd: exit status 1 (497.548407ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image build -t localhost/my-image:functional-20220512225541-516044 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 image build -t localhost/my-image:functional-20220512225541-516044 testdata/build: (2.538050974s)
functional_test.go:315: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220512225541-516044 image build -t localhost/my-image:functional-20220512225541-516044 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 495e80a4ae49
Removing intermediate container 495e80a4ae49
---> a3ead5fa71e0
Step 3/3 : ADD content.txt /
---> 317ee6625b32
Successfully built 317ee6625b32
Successfully tagged localhost/my-image:functional-20220512225541-516044
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.29s)
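
The three logged build steps imply a build context of roughly the following shape. This is a sketch inferred from the output, not the verbatim testdata/build fixture (the real contents of content.txt are unknown):

    # Recreate a build context like the one the test uses (names/contents assumed).
    mkdir -p testdata/build
    printf 'placeholder\n' > testdata/build/content.txt   # actual fixture contents unknown
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > testdata/build/Dockerfile
    # Then build inside the cluster's Docker with the same command as the test:
    out/minikube-linux-amd64 -p functional-20220512225541-516044 image build -t localhost/my-image:functional-20220512225541-516044 testdata/build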

TestFunctional/parallel/ImageCommands/Setup (2.2s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.153026577s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220512225541-516044: (3.809535101s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220512225541-516044: (2.616441476s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220512225541-516044 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.110.76.185 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220512225541-516044 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220512225541-516044: (4.188949126s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.93s)

TestFunctional/parallel/MountCmd/any-port (17.92s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220512225541-516044 /tmp/TestFunctionalparallelMountCmdany-port3150281359/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1652396242194191147" to /tmp/TestFunctionalparallelMountCmdany-port3150281359/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1652396242194191147" to /tmp/TestFunctionalparallelMountCmdany-port3150281359/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1652396242194191147" to /tmp/TestFunctionalparallelMountCmdany-port3150281359/001/test-1652396242194191147
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (456.903392ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 12 22:57 created-by-test
-rw-r--r-- 1 docker docker 24 May 12 22:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 12 22:57 test-1652396242194191147
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh cat /mount-9p/test-1652396242194191147
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220512225541-516044 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [1c4a1989-a1ce-4824-a033-c2636a7f518c] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [1c4a1989-a1ce-4824-a033-c2636a7f518c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [1c4a1989-a1ce-4824-a033-c2636a7f518c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.006485903s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220512225541-516044 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220512225541-516044 /tmp/TestFunctionalparallelMountCmdany-port3150281359/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.92s)
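
Reduced to its essentials, the 9p mount workflow this test exercises looks like the following sketch (the host path is an example; the commands themselves are taken from the log, and the first one runs as a background daemon, as the test does):

    out/minikube-linux-amd64 mount -p functional-20220512225541-516044 /tmp/src:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo umount -f /mount-9p"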

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image save gcr.io/google-containers/addon-resizer:functional-20220512225541-516044 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.79s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image rm gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)
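
ImageSaveToFile and ImageLoadFromFile above form a round trip; by hand it would look like this sketch (tarball path shortened from the Jenkins workspace path in the log):

    out/minikube-linux-amd64 -p functional-20220512225541-516044 image save gcr.io/google-containers/addon-resizer:functional-20220512225541-516044 ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-20220512225541-516044 image load ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-20220512225541-516044 image ls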

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220512225541-516044 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220512225541-516044: (3.856425924s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.92s)

TestFunctional/parallel/MountCmd/specific-port (2.38s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220512225541-516044 /tmp/TestFunctionalparallelMountCmdspecific-port4126387673/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (387.693963ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220512225541-516044 /tmp/TestFunctionalparallelMountCmdspecific-port4126387673/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh "sudo umount -f /mount-9p": exit status 1 (494.651212ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220512225541-516044 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220512225541-516044 /tmp/TestFunctionalparallelMountCmdspecific-port4126387673/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.38s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220512225541-516044
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220512225541-516044
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220512225541-516044
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (61.39s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220512225758-516044 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0512 22:58:23.272949  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:23.278534  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:23.288836  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:23.309114  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:23.349562  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:23.429866  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:23.590264  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:23.910811  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:24.551729  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:25.832359  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:28.393098  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:33.513976  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 22:58:43.755054  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220512225758-516044 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m1.387015202s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (61.39s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.66s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 addons enable ingress --alsologtostderr -v=5
E0512 22:59:04.235911  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 addons enable ingress --alsologtostderr -v=5: (11.664277405s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.66s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (35.35s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220512225758-516044 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220512225758-516044 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.229627247s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220512225758-516044 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220512225758-516044 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [a38859b3-55cc-4527-8425-b03e8a97fb67] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [a38859b3-55cc-4527-8425-b03e8a97fb67] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.007595989s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context ingress-addon-legacy-20220512225758-516044 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 addons disable ingress-dns --alsologtostderr -v=1: (2.496541461s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 addons disable ingress --alsologtostderr -v=1
E0512 22:59:45.197269  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 addons disable ingress --alsologtostderr -v=1: (7.28368225s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (35.35s)
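
Once the ingress and ingress-dns addons are enabled, the core assertions above can be replayed by hand; a sketch using the exact commands from the log:

    out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    out/minikube-linux-amd64 -p ingress-addon-legacy-20220512225758-516044 ip
    nslookup hello-john.test 192.168.49.2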

TestJSONOutput/start/Command (40.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220512225949-516044 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220512225949-516044 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (40.353128932s)
--- PASS: TestJSONOutput/start/Command (40.35s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220512225949-516044 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220512225949-516044 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220512225949-516044 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220512225949-516044 --output=json --user=testUser: (10.845333855s)
--- PASS: TestJSONOutput/stop/Command (10.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.29s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220512230044-516044 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220512230044-516044 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.60895ms)
-- stdout --
	{"specversion":"1.0","id":"123e4b9a-de71-45de-baa2-4ca8376e67a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220512230044-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"03217155-0787-4288-bc40-86f2b7fa0e60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"849c1cea-af72-4ac9-b3ca-93d7a9338f36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0c30bfb1-08fc-4764-abf7-061ede2f57cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig"}}
	{"specversion":"1.0","id":"b4b5719b-69f9-4b2c-b509-f6f8d0131c7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube"}}
	{"specversion":"1.0","id":"cca6a02e-db01-43fd-840e-628c4271c605","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9b7d7fd6-4a45-452a-afc2-2cae3330a899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220512230044-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220512230044-516044
--- PASS: TestErrorJSONOutput (0.29s)
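
Each stdout line above is a CloudEvents envelope; to pull out just the human-readable messages, something like this works (jq usage is an assumption of this note, not part of the test itself):

    out/minikube-linux-amd64 start -p json-output-error-20220512230044-516044 --memory=2200 --output=json --wait=true --driver=fail | jq -r '.data.message // empty'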

                                                
                                    
TestKicCustomNetwork/create_custom_network (26.78s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220512230044-516044 --network=
E0512 23:01:07.118194  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220512230044-516044 --network=: (24.478370393s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220512230044-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220512230044-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220512230044-516044: (2.270271966s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.78s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.89s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220512230111-516044 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220512230111-516044 --network=bridge: (24.814502997s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220512230111-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220512230111-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220512230111-516044: (2.039735129s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.89s)

                                                
                                    
TestKicExistingNetwork (26.89s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220512230138-516044 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220512230138-516044 --network=existing-network: (24.433176821s)
helpers_test.go:175: Cleaning up "existing-network-20220512230138-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220512230138-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220512230138-516044: (2.241242363s)
--- PASS: TestKicExistingNetwork (26.89s)

                                                
                                    
TestKicCustomSubnet (27.28s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220512230205-516044 --subnet=192.168.60.0/24
E0512 23:02:07.583477  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:07.588783  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:07.599070  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:07.619381  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:07.659766  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:07.740183  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:07.900615  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:08.221198  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:08.862147  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:10.142700  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:12.703611  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:17.824754  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:02:28.065023  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220512230205-516044 --subnet=192.168.60.0/24: (25.035630421s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220512230205-516044 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220512230205-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220512230205-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220512230205-516044: (2.21061299s)
--- PASS: TestKicCustomSubnet (27.28s)
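The subnet assertion at kic_custom_network_test.go:133 boils down to reading the first IPAM config entry of the Docker network and comparing it against the value passed to --subnet. A minimal standalone sketch of that check (assuming only a docker CLI on PATH; names are taken from the run above):

// check_subnet.go: verify a Docker network was created with the expected subnet.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const network = "custom-subnet-20220512230205-516044" // network/profile name from the run above
	const want = "192.168.60.0/24"                        // subnet requested via --subnet

	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
	} else {
		fmt.Println("subnet matches:", want)
	}
}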

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220512230232-516044 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220512230232-516044 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.478198998s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.48s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220512230232-516044 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220512230232-516044 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220512230232-516044 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.866039675s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.87s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220512230232-516044 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220512230232-516044 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220512230232-516044 --alsologtostderr -v=5: (1.719706265s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220512230232-516044 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220512230232-516044
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220512230232-516044: (1.269290594s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.97s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220512230232-516044
E0512 23:02:48.545918  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220512230232-516044: (5.967246366s)
--- PASS: TestMountStart/serial/RestartStopped (6.97s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220512230232-516044 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (71.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220512230257-516044 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0512 23:03:23.273316  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 23:03:29.506645  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:03:50.958768  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220512230257-516044 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m11.222726221s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.81s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- rollout status deployment/busybox
E0512 23:04:11.887004  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:04:11.892269  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:04:11.902509  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:04:11.922770  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:04:11.963041  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:04:12.043391  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:04:12.203781  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- rollout status deployment/busybox: (3.272680491s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0512 23:04:12.524302  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-b46w5 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-jhpzn -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-b46w5 -- nslookup kubernetes.default
E0512 23:04:13.164667  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-jhpzn -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-b46w5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-jhpzn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-b46w5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-b46w5 -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-jhpzn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0512 23:04:14.444995  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220512230257-516044 -- exec busybox-7978565885-jhpzn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
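For context on the pipeline above: inside the busybox pods, nslookup host.minikube.internal prints the resolved address on its fifth output line, so awk 'NR==5' selects that line and cut -d' ' -f3 extracts its third space-separated field, the IP itself; the follow-up ping -c 1 192.168.49.1 then confirms that host.minikube.internal resolves to the host-side gateway of the cluster's Docker network.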

                                                
                                    
TestMultiNode/serial/AddNode (26.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220512230257-516044 -v 3 --alsologtostderr
E0512 23:04:17.005219  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:04:22.125804  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:04:32.366957  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220512230257-516044 -v 3 --alsologtostderr: (25.584697249s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.37s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

                                                
                                    
TestMultiNode/serial/CopyFile (12.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp testdata/cp-test.txt multinode-20220512230257-516044:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp multinode-20220512230257-516044:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3577120662/001/cp-test_multinode-20220512230257-516044.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp multinode-20220512230257-516044:/home/docker/cp-test.txt multinode-20220512230257-516044-m02:/home/docker/cp-test_multinode-20220512230257-516044_multinode-20220512230257-516044-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m02 "sudo cat /home/docker/cp-test_multinode-20220512230257-516044_multinode-20220512230257-516044-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp multinode-20220512230257-516044:/home/docker/cp-test.txt multinode-20220512230257-516044-m03:/home/docker/cp-test_multinode-20220512230257-516044_multinode-20220512230257-516044-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m03 "sudo cat /home/docker/cp-test_multinode-20220512230257-516044_multinode-20220512230257-516044-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp testdata/cp-test.txt multinode-20220512230257-516044-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp multinode-20220512230257-516044-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3577120662/001/cp-test_multinode-20220512230257-516044-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp multinode-20220512230257-516044-m02:/home/docker/cp-test.txt multinode-20220512230257-516044:/home/docker/cp-test_multinode-20220512230257-516044-m02_multinode-20220512230257-516044.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044 "sudo cat /home/docker/cp-test_multinode-20220512230257-516044-m02_multinode-20220512230257-516044.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp multinode-20220512230257-516044-m02:/home/docker/cp-test.txt multinode-20220512230257-516044-m03:/home/docker/cp-test_multinode-20220512230257-516044-m02_multinode-20220512230257-516044-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m03 "sudo cat /home/docker/cp-test_multinode-20220512230257-516044-m02_multinode-20220512230257-516044-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp testdata/cp-test.txt multinode-20220512230257-516044-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp multinode-20220512230257-516044-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3577120662/001/cp-test_multinode-20220512230257-516044-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m03 "sudo cat /home/docker/cp-test.txt"
E0512 23:04:51.427457  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp multinode-20220512230257-516044-m03:/home/docker/cp-test.txt multinode-20220512230257-516044:/home/docker/cp-test_multinode-20220512230257-516044-m03_multinode-20220512230257-516044.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044 "sudo cat /home/docker/cp-test_multinode-20220512230257-516044-m03_multinode-20220512230257-516044.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 cp multinode-20220512230257-516044-m03:/home/docker/cp-test.txt multinode-20220512230257-516044-m02:/home/docker/cp-test_multinode-20220512230257-516044-m03_multinode-20220512230257-516044-m02.txt
E0512 23:04:52.847842  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 ssh -n multinode-20220512230257-516044-m02 "sudo cat /home/docker/cp-test_multinode-20220512230257-516044-m03_multinode-20220512230257-516044-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.52s)

                                                
                                    
TestMultiNode/serial/StopNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220512230257-516044 node stop m03: (1.278716643s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220512230257-516044 status: exit status 7 (628.760644ms)

                                                
                                                
-- stdout --
	multinode-20220512230257-516044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220512230257-516044-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220512230257-516044-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220512230257-516044 status --alsologtostderr: exit status 7 (628.95823ms)

                                                
                                                
-- stdout --
	multinode-20220512230257-516044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220512230257-516044-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220512230257-516044-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 23:04:55.886317  609118 out.go:296] Setting OutFile to fd 1 ...
	I0512 23:04:55.886497  609118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:04:55.886508  609118 out.go:309] Setting ErrFile to fd 2...
	I0512 23:04:55.886513  609118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:04:55.886628  609118 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 23:04:55.886801  609118 out.go:303] Setting JSON to false
	I0512 23:04:55.886829  609118 mustload.go:65] Loading cluster: multinode-20220512230257-516044
	I0512 23:04:55.887140  609118 config.go:178] Loaded profile config "multinode-20220512230257-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:04:55.887156  609118 status.go:253] checking status of multinode-20220512230257-516044 ...
	I0512 23:04:55.887502  609118 cli_runner.go:164] Run: docker container inspect multinode-20220512230257-516044 --format={{.State.Status}}
	I0512 23:04:55.920008  609118 status.go:328] multinode-20220512230257-516044 host status = "Running" (err=<nil>)
	I0512 23:04:55.920037  609118 host.go:66] Checking if "multinode-20220512230257-516044" exists ...
	I0512 23:04:55.920300  609118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220512230257-516044
	I0512 23:04:55.951433  609118 host.go:66] Checking if "multinode-20220512230257-516044" exists ...
	I0512 23:04:55.951775  609118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 23:04:55.951838  609118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220512230257-516044
	I0512 23:04:55.982670  609118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/multinode-20220512230257-516044/id_rsa Username:docker}
	I0512 23:04:56.073816  609118 ssh_runner.go:195] Run: systemctl --version
	I0512 23:04:56.077481  609118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:04:56.086521  609118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 23:04:56.193402  609118 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:93 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-12 23:04:56.116800212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 23:04:56.193921  609118 kubeconfig.go:92] found "multinode-20220512230257-516044" server: "https://192.168.49.2:8443"
	I0512 23:04:56.193944  609118 api_server.go:165] Checking apiserver status ...
	I0512 23:04:56.193973  609118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 23:04:56.203918  609118 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1735/cgroup
	I0512 23:04:56.211133  609118 api_server.go:181] apiserver freezer: "12:freezer:/docker/b2e96369f1c25e2b592a8a68cff2ed10dc2bccc1bab0044fba7cf8c724ce1a1c/kubepods/burstable/pode7ba74d908afffb3bea46caab4520288/931b2a284bcf5d5b52b053c6c9d0bef934288f733bddb75c4c1fffd5e2ab3afe"
	I0512 23:04:56.211184  609118 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b2e96369f1c25e2b592a8a68cff2ed10dc2bccc1bab0044fba7cf8c724ce1a1c/kubepods/burstable/pode7ba74d908afffb3bea46caab4520288/931b2a284bcf5d5b52b053c6c9d0bef934288f733bddb75c4c1fffd5e2ab3afe/freezer.state
	I0512 23:04:56.217677  609118 api_server.go:203] freezer state: "THAWED"
	I0512 23:04:56.217703  609118 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0512 23:04:56.222667  609118 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0512 23:04:56.222692  609118 status.go:419] multinode-20220512230257-516044 apiserver status = Running (err=<nil>)
	I0512 23:04:56.222717  609118 status.go:255] multinode-20220512230257-516044 status: &{Name:multinode-20220512230257-516044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0512 23:04:56.222743  609118 status.go:253] checking status of multinode-20220512230257-516044-m02 ...
	I0512 23:04:56.222990  609118 cli_runner.go:164] Run: docker container inspect multinode-20220512230257-516044-m02 --format={{.State.Status}}
	I0512 23:04:56.256225  609118 status.go:328] multinode-20220512230257-516044-m02 host status = "Running" (err=<nil>)
	I0512 23:04:56.256254  609118 host.go:66] Checking if "multinode-20220512230257-516044-m02" exists ...
	I0512 23:04:56.256536  609118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220512230257-516044-m02
	I0512 23:04:56.287530  609118 host.go:66] Checking if "multinode-20220512230257-516044-m02" exists ...
	I0512 23:04:56.287795  609118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 23:04:56.287830  609118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220512230257-516044-m02
	I0512 23:04:56.319675  609118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/machines/multinode-20220512230257-516044-m02/id_rsa Username:docker}
	I0512 23:04:56.410004  609118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 23:04:56.419470  609118 status.go:255] multinode-20220512230257-516044-m02 status: &{Name:multinode-20220512230257-516044-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0512 23:04:56.419527  609118 status.go:253] checking status of multinode-20220512230257-516044-m03 ...
	I0512 23:04:56.419768  609118 cli_runner.go:164] Run: docker container inspect multinode-20220512230257-516044-m03 --format={{.State.Status}}
	I0512 23:04:56.451456  609118 status.go:328] multinode-20220512230257-516044-m03 host status = "Stopped" (err=<nil>)
	I0512 23:04:56.451487  609118 status.go:341] host is not running, skipping remaining checks
	I0512 23:04:56.451495  609118 status.go:255] multinode-20220512230257-516044-m03 status: &{Name:multinode-20220512230257-516044-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.54s)
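Note that the two exit status 7 results above are the expected outcome, not failures: minikube status encodes cluster state in its exit code, and a non-zero code still comes with a full per-node report on stdout. A minimal Go sketch of capturing both (binary path and profile name taken from the log above):

// status_check.go: run `minikube status` and read both the exit code and stdout.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-20220512230257-516044", "status")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode()) // 7 above, with one node stopped
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Print(string(out))
}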

                                                
                                    
TestMultiNode/serial/StartAfterStop (24.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220512230257-516044 node start m03 --alsologtostderr: (23.942673721s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.81s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (101.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220512230257-516044
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220512230257-516044
E0512 23:05:33.808879  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220512230257-516044: (22.635338438s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220512230257-516044 --wait=true -v=8 --alsologtostderr
E0512 23:06:55.729931  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220512230257-516044 --wait=true -v=8 --alsologtostderr: (1m18.846770476s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220512230257-516044
--- PASS: TestMultiNode/serial/RestartKeepsNodes (101.61s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220512230257-516044 node delete m03: (4.546845869s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status --alsologtostderr
E0512 23:07:07.581687  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220512230257-516044 stop: (21.485482384s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220512230257-516044 status: exit status 7 (127.26126ms)

                                                
                                                
-- stdout --
	multinode-20220512230257-516044
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220512230257-516044-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220512230257-516044 status --alsologtostderr: exit status 7 (126.052935ms)

                                                
                                                
-- stdout --
	multinode-20220512230257-516044
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220512230257-516044-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 23:07:29.818598  623584 out.go:296] Setting OutFile to fd 1 ...
	I0512 23:07:29.818775  623584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:07:29.818785  623584 out.go:309] Setting ErrFile to fd 2...
	I0512 23:07:29.818789  623584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 23:07:29.818906  623584 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/bin
	I0512 23:07:29.819069  623584 out.go:303] Setting JSON to false
	I0512 23:07:29.819089  623584 mustload.go:65] Loading cluster: multinode-20220512230257-516044
	I0512 23:07:29.819406  623584 config.go:178] Loaded profile config "multinode-20220512230257-516044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 23:07:29.819422  623584 status.go:253] checking status of multinode-20220512230257-516044 ...
	I0512 23:07:29.819769  623584 cli_runner.go:164] Run: docker container inspect multinode-20220512230257-516044 --format={{.State.Status}}
	I0512 23:07:29.851329  623584 status.go:328] multinode-20220512230257-516044 host status = "Stopped" (err=<nil>)
	I0512 23:07:29.851355  623584 status.go:341] host is not running, skipping remaining checks
	I0512 23:07:29.851365  623584 status.go:255] multinode-20220512230257-516044 status: &{Name:multinode-20220512230257-516044 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0512 23:07:29.851400  623584 status.go:253] checking status of multinode-20220512230257-516044-m02 ...
	I0512 23:07:29.851631  623584 cli_runner.go:164] Run: docker container inspect multinode-20220512230257-516044-m02 --format={{.State.Status}}
	I0512 23:07:29.882042  623584 status.go:328] multinode-20220512230257-516044-m02 host status = "Stopped" (err=<nil>)
	I0512 23:07:29.882064  623584 status.go:341] host is not running, skipping remaining checks
	I0512 23:07:29.882070  623584 status.go:255] multinode-20220512230257-516044-m02 status: &{Name:multinode-20220512230257-516044-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.74s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (59.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220512230257-516044 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0512 23:07:35.268127  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:08:23.273448  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220512230257-516044 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (59.248136415s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220512230257-516044 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220512230257-516044
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220512230257-516044-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220512230257-516044-m02 --driver=docker  --container-runtime=docker: exit status 14 (81.177945ms)

                                                
                                                
-- stdout --
	* [multinode-20220512230257-516044-m02] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220512230257-516044-m02' is duplicated with machine name 'multinode-20220512230257-516044-m02' in profile 'multinode-20220512230257-516044'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220512230257-516044-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220512230257-516044-m03 --driver=docker  --container-runtime=docker: (25.058189343s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220512230257-516044
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220512230257-516044: exit status 80 (361.647892ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220512230257-516044
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220512230257-516044-m03 already exists in multinode-20220512230257-516044-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220512230257-516044-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220512230257-516044-m03: (2.306671349s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.88s)

                                                
                                    
TestPreload (116.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220512230902-516044 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
E0512 23:09:11.887053  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:09:39.571030  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220512230902-516044 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m13.682077004s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220512230902-516044 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220512230902-516044 -- docker pull gcr.io/k8s-minikube/busybox: (1.899208652s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220512230902-516044 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220512230902-516044 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (38.682845277s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220512230902-516044 -- docker images
helpers_test.go:175: Cleaning up "test-preload-20220512230902-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220512230902-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220512230902-516044: (2.312256499s)
--- PASS: TestPreload (116.96s)
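TestPreload exercises the non-preloaded path end to end: start v1.17.0 with --preload=false, pull an extra image inside the node, restart onto v1.17.3, then check the image survived. A rough Go sketch of that sequence, assuming only the commands that appear verbatim in the log (the profile name here is a hypothetical stand-in):

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func run(args ...string) string {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		mk := "out/minikube-linux-amd64"
		p := "test-preload-sketch" // hypothetical profile name
		run(mk, "start", "-p", p, "--memory=2200", "--preload=false",
			"--driver=docker", "--container-runtime=docker", "--kubernetes-version=v1.17.0")
		run(mk, "ssh", "-p", p, "--", "docker", "pull", "gcr.io/k8s-minikube/busybox")
		// Restart onto a newer patch release; cached images should be kept.
		run(mk, "start", "-p", p, "--memory=2200", "--driver=docker",
			"--container-runtime=docker", "--kubernetes-version=v1.17.3")
		if !strings.Contains(run(mk, "ssh", "-p", p, "--", "docker", "images"),
			"gcr.io/k8s-minikube/busybox") {
			log.Fatal("busybox image was lost across the upgrade")
		}
	}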

                                                
                                    
TestScheduledStopUnix (98.39s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220512231059-516044 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220512231059-516044 --memory=2048 --driver=docker  --container-runtime=docker: (24.757105302s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220512231059-516044 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220512231059-516044 -n scheduled-stop-20220512231059-516044
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220512231059-516044 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220512231059-516044 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220512231059-516044 -n scheduled-stop-20220512231059-516044
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220512231059-516044
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220512231059-516044 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0512 23:12:07.583093  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220512231059-516044
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220512231059-516044: exit status 7 (97.046757ms)

                                                
                                                
-- stdout --
	scheduled-stop-20220512231059-516044
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220512231059-516044 -n scheduled-stop-20220512231059-516044
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220512231059-516044 -n scheduled-stop-20220512231059-516044: exit status 7 (94.952969ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220512231059-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220512231059-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220512231059-516044: (1.824744723s)
--- PASS: TestScheduledStopUnix (98.39s)
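Once the 15s schedule fires, `status` flips to exit code 7 with every component reported Stopped (the test treats exit 7 as "may be ok"). A small Go poller in the same spirit, assuming a hypothetical profile name:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		profile := "scheduled-stop-sketch" // hypothetical
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("out/minikube-linux-amd64", "status",
				"--format={{.Host}}", "-p", profile).Output()
			if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
				// Exit 7 plus "Stopped" is the expected post-shutdown state.
				fmt.Printf("host stopped: %s\n", out)
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the scheduled stop")
	}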

                                                
                                    
TestSkaffold (56.72s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:56: (dbg) Run:  /tmp/skaffold.exe3198272337 version
skaffold_test.go:60: skaffold version: v1.38.0
skaffold_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20220512231237-516044 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220512231237-516044 --memory=2600 --driver=docker  --container-runtime=docker: (25.08610586s)
skaffold_test.go:83: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:107: (dbg) Run:  /tmp/skaffold.exe3198272337 run --minikube-profile skaffold-20220512231237-516044 --kube-context skaffold-20220512231237-516044 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:107: (dbg) Done: /tmp/skaffold.exe3198272337 run --minikube-profile skaffold-20220512231237-516044 --kube-context skaffold-20220512231237-516044 --status-check=true --port-forward=false --interactive=false: (17.739127195s)
skaffold_test.go:113: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-b97587568-wqg68" [0c4cfce2-bc4d-4c22-8c0b-0c121f9a85da] Running
E0512 23:13:23.273479  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
skaffold_test.go:113: (dbg) TestSkaffold: app=leeroy-app healthy within 5.010753698s
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-85c8ffddff-xkvdw" [fe36dfcd-fd59-447d-b228-da86169a4f2d] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006000136s
helpers_test.go:175: Cleaning up "skaffold-20220512231237-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20220512231237-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220512231237-516044: (2.426255295s)
--- PASS: TestSkaffold (56.72s)
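The two health checks above wait for pods labeled app=leeroy-app and app=leeroy-web to become Running. The same wait can be expressed with plain kubectl from Go; the context name below is a hypothetical stand-in, and jsonpath output is standard kubectl:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podsRunning(ctx, label string) bool {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err != nil || len(phases) == 0 {
			return false
		}
		for _, p := range phases {
			if p != "Running" {
				return false
			}
		}
		return true
	}

	func main() {
		for _, label := range []string{"app=leeroy-app", "app=leeroy-web"} {
			for i := 0; i < 60 && !podsRunning("skaffold-sketch", label); i++ {
				time.Sleep(time.Second)
			}
			fmt.Println(label, "running:", podsRunning("skaffold-sketch", label))
		}
	}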

                                                
                                    
TestInsufficientStorage (13.32s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220512231334-516044 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220512231334-516044 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.767259894s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c77c5e15-c70b-4067-ba39-e7a11c5fccc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220512231334-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"22958baf-e892-4876-8220-d961598e1521","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"65742685-f892-49a1-8323-7ee7327d75c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c72e0a16-c202-43d1-800f-960c0fa1cc73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig"}}
	{"specversion":"1.0","id":"e68fbf33-e591-4f0a-85cf-7156e711ec9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube"}}
	{"specversion":"1.0","id":"c24e3a91-2ef1-4fc6-a8e2-432cf123fee9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c6417e84-4b01-4f5a-99af-cdc1a9f0cace","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fd6c3260-6b6c-4b0a-a942-0ee06a992772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e511f277-ce49-4f4f-90b8-1ab86442880c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6252dbe3-141b-42a8-ac59-646e3e88f544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with the root privilege"}}
	{"specversion":"1.0","id":"08b6539b-03cc-442f-b500-059edada5182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220512231334-516044 in cluster insufficient-storage-20220512231334-516044","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"da3facd1-d096-4035-806e-8c6b8a526ac7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d66baa2-fd63-4e82-8e76-c3b44675cf9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"72a0061c-bf6a-4d79-97ac-1907d689d7d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220512231334-516044 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220512231334-516044 --output=json --layout=cluster: exit status 7 (363.891584ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220512231334-516044","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220512231334-516044","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0512 23:13:45.320072  656455 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220512231334-516044" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220512231334-516044 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220512231334-516044 --output=json --layout=cluster: exit status 7 (357.316674ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220512231334-516044","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220512231334-516044","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0512 23:13:45.678519  656566 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220512231334-516044" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	E0512 23:13:45.687080  656566 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/insufficient-storage-20220512231334-516044/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220512231334-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220512231334-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220512231334-516044: (1.833297324s)
--- PASS: TestInsufficientStorage (13.32s)
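With --output=json, minikube emits one CloudEvent per line; the storage failure surfaces as an io.k8s.sigs.minikube.error event with data.name RSRC_DOCKER_STORAGE and exitcode 26. A minimal scanner for that stream, with the struct fields taken from the events printed above:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the CloudEvent envelope seen in the log above.
	type event struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
		} `json:"data"`
	}

	func main() {
		// Pipe minikube's --output=json stream into stdin, e.g.:
		//   minikube start --output=json ... | go run scan.go
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // JSON lines can be long
		for sc.Scan() {
			var e event
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event %s (exit %s): %s\n",
					e.Data.Name, e.Data.ExitCode, e.Data.Message)
			}
		}
	}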

                                                
                                    
TestRunningBinaryUpgrade (60.14s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.198850472.exe start -p running-upgrade-20220512231638-516044 --memory=2200 --vm-driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.198850472.exe start -p running-upgrade-20220512231638-516044 --memory=2200 --vm-driver=docker  --container-runtime=docker: (38.870690465s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220512231638-516044 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220512231638-516044 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (18.318218102s)
helpers_test.go:175: Cleaning up "running-upgrade-20220512231638-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220512231638-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220512231638-516044: (2.095518003s)
--- PASS: TestRunningBinaryUpgrade (60.14s)

                                                
                                    
TestKubernetesUpgrade (86.06s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220512231512-516044 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220512231512-516044 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.124964515s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220512231512-516044
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220512231512-516044: (1.450635358s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220512231512-516044 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220512231512-516044 status --format={{.Host}}: exit status 7 (148.366051ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220512231512-516044 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220512231512-516044 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.159813412s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220512231512-516044 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220512231512-516044 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220512231512-516044 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (88.96694ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220512231512-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220512231512-516044
	    minikube start -p kubernetes-upgrade-20220512231512-516044 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220512231512-5160442 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220512231512-516044 --kubernetes-version=v1.23.6-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220512231512-516044 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220512231512-516044 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (3.589161903s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220512231512-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220512231512-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220512231512-516044: (2.442524379s)
--- PASS: TestKubernetesUpgrade (86.06s)
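After the attempted downgrade is rejected with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and the cluster is restarted, the test confirms the control plane really runs v1.23.6-rc.0 via `kubectl version --output=json`. A sketch of that check; the context name is a hypothetical stand-in, and serverVersion.gitVersion is standard kubectl output:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-sketch",
			"version", "--output=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var v struct {
			ServerVersion struct {
				GitVersion string `json:"gitVersion"`
			} `json:"serverVersion"`
		}
		if err := json.Unmarshal(out, &v); err != nil {
			log.Fatal(err)
		}
		if v.ServerVersion.GitVersion != "v1.23.6-rc.0" {
			log.Fatalf("unexpected server version %q", v.ServerVersion.GitVersion)
		}
		fmt.Println("upgrade landed on", v.ServerVersion.GitVersion)
	}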

                                                
                                    
TestMissingContainerUpgrade (125.74s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.3516075308.exe start -p missing-upgrade-20220512231509-516044 --memory=2200 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.3516075308.exe start -p missing-upgrade-20220512231509-516044 --memory=2200 --driver=docker  --container-runtime=docker: (1m3.065879151s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220512231509-516044
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220512231509-516044: (10.358440013s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220512231509-516044
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220512231509-516044 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220512231509-516044 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.902793528s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220512231509-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220512231509-516044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220512231509-516044: (5.689280553s)
--- PASS: TestMissingContainerUpgrade (125.74s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (117.103982ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220512231347-516044] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --driver=docker  --container-runtime=docker
E0512 23:14:11.886862  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --driver=docker  --container-runtime=docker: (43.261035453s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220512231347-516044 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.77s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.20s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --no-kubernetes --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --no-kubernetes --driver=docker  --container-runtime=docker: (15.014196582s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220512231347-516044 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220512231347-516044 status -o json: exit status 2 (703.977139ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220512231347-516044","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220512231347-516044
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220512231347-516044: (2.47810473s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.20s)

                                                
                                    
TestNoKubernetes/serial/Start (5.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --no-kubernetes --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --no-kubernetes --driver=docker  --container-runtime=docker: (5.708310752s)
--- PASS: TestNoKubernetes/serial/Start (5.71s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (129.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.4252041146.exe start -p stopped-upgrade-20220512231453-516044 --memory=2200 --vm-driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /tmp/minikube-v1.9.0.4252041146.exe start -p stopped-upgrade-20220512231453-516044 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 70 (58.778798569s)

                                                
                                                
-- stdout --
	! [stopped-upgrade-20220512231453-516044] minikube v1.9.0 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - KUBECONFIG=/tmp/legacy_kubeconfig4034574151
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (8 available), Memory=2200MB (32103MB available) ...
	* Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
	  - kubeadm.pod-network-cidr=10.244.0.0/16

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.25.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	    > kubectl.sha256: 65 B / 65 B [100.00% ? p/s 0s]
	    > kubeadm.sha256: 65 B / 65 B [100.00% ? p/s 0s]
	    > kubelet.sha256: 65 B / 65 B [100.00% ? p/s 0s]
	    > kubectl: 41.98 MiB / 41.98 MiB [100.00% 363.26 MiB p/s 0s]
	    > kubeadm: 37.96 MiB / 37.96 MiB [100.00% 321.73 MiB p/s 0s]
	    > kubelet: 108.01 MiB / 108.01 MiB [100.00% 235.98 MiB p/s 1s]
	* 
	X Failed to update cluster: updating node: downloading binaries: downloading kubelet: chmod +x /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/linux/v1.18.0/kubelet: chmod /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/cache/linux/v1.18.0/kubelet.download: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.4252041146.exe start -p stopped-upgrade-20220512231453-516044 --memory=2200 --vm-driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.4252041146.exe start -p stopped-upgrade-20220512231453-516044 --memory=2200 --vm-driver=docker  --container-runtime=docker: (33.118086919s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.4252041146.exe -p stopped-upgrade-20220512231453-516044 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.4252041146.exe -p stopped-upgrade-20220512231453-516044 stop: (12.173260845s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220512231453-516044 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220512231453-516044 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.98208549s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (129.01s)
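The first v1.9.0 start above failed with exit 70 because the kubelet download raced its own chmod (note the .download temp file in the error), and the test simply reran the identical command, which then succeeded. A generic retry wrapper in that spirit; the attempt count, binary path, and profile name are assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runWithRetry reruns a command verbatim, the way the upgrade test recovers
	// from a legacy binary's transient download failure.
	func runWithRetry(attempts int, name string, args ...string) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; retrying\n", i+1, err)
			time.Sleep(10 * time.Second)
		}
		return err
	}

	func main() {
		err := runWithRetry(2, "/tmp/minikube-v1.9.0", "start",
			"-p", "stopped-upgrade-sketch", "--memory=2200", "--vm-driver=docker")
		fmt.Println("final result:", err)
	}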

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220512231347-516044 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220512231347-516044 "sudo systemctl is-active --quiet service kubelet": exit status 1 (383.278419ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
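The probe above passes precisely because the command fails: with Kubernetes disabled, `systemctl is-active` reports kubelet inactive (remote status 3, surfaced by `minikube ssh` as exit 1). The same negative assertion in Go, with a hypothetical profile name:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// A non-zero exit means kubelet is not an active systemd unit on the node.
		cmd := exec.Command("out/minikube-linux-amd64", "ssh",
			"-p", "nokubernetes-sketch", // hypothetical
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet inactive, as expected:", err)
			return
		}
		fmt.Println("unexpected: kubelet is active")
	}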

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.67s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220512231347-516044
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220512231347-516044: (1.973032234s)
--- PASS: TestNoKubernetes/serial/Stop (1.97s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220512231347-516044 --driver=docker  --container-runtime=docker: (7.22109479s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220512231347-516044 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220512231347-516044 "sudo systemctl is-active --quiet service kubelet": exit status 1 (666.882521ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.67s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220512231453-516044
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220512231453-516044: (1.683511497s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)

                                                
                                    
TestPause/serial/Start (45.21s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220512231707-516044 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0512 23:17:07.581562  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220512231707-516044 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (45.209497632s)
--- PASS: TestPause/serial/Start (45.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (125.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220512231738-516044 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220512231738-516044 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m5.597803172s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (125.60s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.46s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220512231707-516044 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220512231707-516044 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (6.428671924s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (57.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220512231753-516044 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220512231753-516044 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (57.990000673s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.99s)

                                                
                                    
TestPause/serial/Pause (0.96s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220512231707-516044 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.96s)

                                                
                                    
TestPause/serial/VerifyStatus (0.52s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220512231707-516044 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220512231707-516044 --output=json --layout=cluster: exit status 2 (517.600027ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20220512231707-516044","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220512231707-516044","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.52s)
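The --layout=cluster payload reuses HTTP-style status codes: 200 OK, 405 Stopped, 418 Paused here, and 507 InsufficientStorage earlier in this report. A small decoder for that JSON, with the structs shaped after the output printed above (profile name hypothetical; exit status 2 is expected for a paused cluster, so the command error is ignored):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string               `json:"Name"`
			Components map[string]component `json:"Components"`
		} `json:"Nodes"`
	}

	func main() {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"-p", "pause-sketch", "--output=json", "--layout=cluster").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
		for _, n := range st.Nodes {
			for _, c := range n.Components {
				fmt.Printf("  %s/%s: %d %s\n", n.Name, c.Name, c.StatusCode, c.StatusName)
			}
		}
	}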

                                                
                                    
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220512231707-516044 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
TestPause/serial/PauseAgain (0.91s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220512231707-516044 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

                                                
                                    
TestPause/serial/DeletePaused (2.68s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220512231707-516044 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220512231707-516044 --alsologtostderr -v=5: (2.682206158s)
--- PASS: TestPause/serial/DeletePaused (2.68s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (8.67s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (8.543294886s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220512231707-516044
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220512231707-516044: exit status 1 (37.252609ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220512231707-516044

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (8.67s)
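Cleanup is verified negatively: `docker volume inspect` on the deleted profile must fail with "No such volume" (exit status 1 above). The same assertion in Go, profile name hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "volume", "inspect",
			"pause-sketch").CombinedOutput()
		if err != nil && strings.Contains(string(out), "No such volume") {
			fmt.Println("volume gone, cleanup verified")
			return
		}
		fmt.Printf("volume still present or unexpected error: %v\n%s", err, out)
	}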

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (250.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220512231813-516044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220512231813-516044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (4m10.785463705s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (250.79s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (41.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220512231821-516044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0512 23:18:21.785503  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:21.825783  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:21.906593  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:22.066930  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:22.387339  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:23.027589  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:23.273012  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 23:18:24.307917  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:26.869210  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:30.629076  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
E0512 23:18:31.989535  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:42.229802  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220512231821-516044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (41.132510878s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (41.13s)
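This profile moves the apiserver to port 8444 via --apiserver-port. A hand check that the non-default port took effect (sketch; the cluster name is assumed to match the profile, which is minikube's default):

    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-different-port-20220512231821-516044")].cluster.server}'
    # expect an https URL ending in :8444 (the host part varies per run)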

TestStartStop/group/no-preload/serial/DeployApp (9.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220512231753-516044 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [3cae7325-e76c-4e1e-a3f6-74e91b64edf8] Pending
helpers_test.go:342: "busybox" [3cae7325-e76c-4e1e-a3f6-74e91b64edf8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [3cae7325-e76c-4e1e-a3f6-74e91b64edf8] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.01175617s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220512231753-516044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.53s)
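The deploy step polls for pods labelled integration-test=busybox until they report Running. Roughly the same wait, expressed with plain kubectl (a sketch of the idea, not the helper's actual implementation):

    kubectl --context no-preload-20220512231753-516044 -n default wait pod \
      -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context no-preload-20220512231753-516044 exec busybox -- /bin/sh -c "ulimit -n"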

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220512231753-516044 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220512231753-516044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.71s)
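The --images/--registries pair redirects where the metrics-server image is pulled from, and the describe call is what the test inspects. Checked by hand it would look like this (sketch; the fake.domain prefix reflects how the registry override is prepended to the image override, an assumption about minikube's addon image handling rather than something this log shows):

    kubectl --context no-preload-20220512231753-516044 -n kube-system \
      describe deploy/metrics-server | grep -i 'Image:'
    # expected (assumed): fake.domain/k8s.gcr.io/echoserver:1.4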

TestStartStop/group/no-preload/serial/Stop (10.88s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220512231753-516044 --alsologtostderr -v=3
E0512 23:19:02.710678  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220512231753-516044 --alsologtostderr -v=3: (10.880011395s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.88s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.43s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220512231821-516044 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [8acb2f76-8023-40c1-b58e-5060c6624794] Pending
helpers_test.go:342: "busybox" [8acb2f76-8023-40c1-b58e-5060c6624794] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [8acb2f76-8023-40c1-b58e-5060c6624794] Running
E0512 23:19:11.886793  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.013773262s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220512231821-516044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.43s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044
=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044: exit status 7 (107.958194ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220512231753-516044 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
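Note the "(may be ok)" above: minikube status exits non-zero for a stopped cluster, and this step accepts exit status 7. The same tolerance written out in shell (sketch, using the flags from the log):

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044
    rc=$?   # 0 = running, 7 = stopped host; both acceptable for this step
    [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ] || echo "unexpected status exit code: $rc"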

TestStartStop/group/no-preload/serial/SecondStart (335.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220512231753-516044 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220512231753-516044 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (5m34.620440634s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.08s)
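This restart also switches the cluster to v1.23.6-rc.0 with --preload=false, so images are pulled individually instead of coming from a preload tarball. One way to confirm the version after the restart (sketch; jq is used here only for readability):

    kubectl --context no-preload-20220512231753-516044 version -o json | jq -r '.serverVersion.gitVersion'
    # expect: v1.23.6-rc.0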

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.63s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220512231821-516044 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220512231821-516044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.63s)

TestStartStop/group/default-k8s-different-port/serial/Stop (10.9s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220512231821-516044 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220512231821-516044 --alsologtostderr -v=3: (10.900484982s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.90s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220512231821-516044 -n default-k8s-different-port-20220512231821-516044
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220512231821-516044 -n default-k8s-different-port-20220512231821-516044: exit status 7 (112.804626ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220512231821-516044 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (332.36s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220512231821-516044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0512 23:19:43.671298  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220512231821-516044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (5m31.704127516s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220512231821-516044 -n default-k8s-different-port-20220512231821-516044
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (332.36s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220512231738-516044 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c752cb5e-7557-40f6-9426-53bfe12dcce1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [c752cb5e-7557-40f6-9426-53bfe12dcce1] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.012557577s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220512231738-516044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220512231738-516044 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220512231738-516044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.61s)

TestStartStop/group/old-k8s-version/serial/Stop (10.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220512231738-516044 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220512231738-516044 --alsologtostderr -v=3: (10.988643434s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220512231738-516044 -n old-k8s-version-20220512231738-516044
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220512231738-516044 -n old-k8s-version-20220512231738-516044: exit status 7 (103.997291ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220512231738-516044 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (402.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220512231738-516044 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0512 23:20:34.931796  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
E0512 23:21:05.591658  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:22:07.581943  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/functional-20220512225541-516044/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220512231738-516044 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (6m41.952831347s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220512231738-516044 -n old-k8s-version-20220512231738-516044
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (402.54s)

TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220512231813-516044 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [24a267ab-be81-4f5c-aa29-10fc70ad5fda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [24a267ab-be81-4f5c-aa29-10fc70ad5fda] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.012571413s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220512231813-516044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.65s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220512231813-516044 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220512231813-516044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.65s)

TestStartStop/group/embed-certs/serial/Stop (10.88s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220512231813-516044 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220512231813-516044 --alsologtostderr -v=3: (10.88166179s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.88s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220512231813-516044 -n embed-certs-20220512231813-516044
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220512231813-516044 -n embed-certs-20220512231813-516044: exit status 7 (100.334309ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220512231813-516044 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (592.91s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220512231813-516044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0512 23:23:21.748919  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:23:23.273371  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 23:23:49.432138  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:24:11.886885  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/ingress-addon-legacy-20220512225758-516044/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220512231813-516044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (9m52.408454414s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220512231813-516044 -n embed-certs-20220512231813-516044
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (592.91s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.05s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-ls49k" [78e925ee-cce0-4c3a-80b9-91249968cfc0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-ls49k" [78e925ee-cce0-4c3a-80b9-91249968cfc0] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.051558616s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.05s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (18.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rt6cq" [308fdfb8-b0a7-4cc1-8184-dde5a815a19f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rt6cq" [308fdfb8-b0a7-4cc1-8184-dde5a815a19f] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.013078292s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (18.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-ls49k" [78e925ee-cce0-4c3a-80b9-91249968cfc0] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010820778s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220512231753-516044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220512231753-516044 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)
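The image audit runs crictl over minikube ssh and flags anything outside the expected Kubernetes image set; the busybox image left over from DeployApp is the one extra hit here. The same listing, filtered by hand (sketch; the test parses this JSON in Go, jq is just convenient interactively):

    out/minikube-linux-amd64 ssh -p no-preload-20220512231753-516044 \
      "sudo crictl images -o json" | jq -r '.images[].repoTags[]' | sort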

TestStartStop/group/no-preload/serial/Pause (3.21s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220512231753-516044 --alsologtostderr -v=1
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044: exit status 2 (439.897321ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044: exit status 2 (449.750528ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220512231753-516044 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220512231753-516044 -n no-preload-20220512231753-516044
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.21s)
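The pause sequence reads as a small protocol: after pause, status reports the apiserver as Paused and the kubelet as Stopped, each via exit status 2 (tolerated as "(may be ok)"), and unpause restores both. Condensed into shell (sketch, using the same flags as the log above):

    p=no-preload-20220512231753-516044
    out/minikube-linux-amd64 pause -p "$p" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p "$p" -n "$p"   # prints Paused, exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p "$p" -n "$p"     # prints Stopped, exit 2
    out/minikube-linux-amd64 unpause -p "$p" --alsologtostderr -v=1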

TestStartStop/group/newest-cni/serial/FirstStart (43.68s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220512232515-516044 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220512232515-516044 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (43.682520294s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.68s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.22s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rt6cq" [308fdfb8-b0a7-4cc1-8184-dde5a815a19f] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007598546s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220512231821-516044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.22s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.55s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220512231821-516044 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.55s)

TestStartStop/group/default-k8s-different-port/serial/Pause (4.52s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220512231821-516044 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-different-port-20220512231821-516044 --alsologtostderr -v=1: (1.833858598s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220512231821-516044 -n default-k8s-different-port-20220512231821-516044
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220512231821-516044 -n default-k8s-different-port-20220512231821-516044: exit status 2 (403.814798ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220512231821-516044 -n default-k8s-different-port-20220512231821-516044
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220512231821-516044 -n default-k8s-different-port-20220512231821-516044: exit status 2 (457.3457ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220512231821-516044 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220512231821-516044 -n default-k8s-different-port-20220512231821-516044
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220512231821-516044 -n default-k8s-different-port-20220512231821-516044
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (4.52s)

TestNetworkPlugins/group/auto/Start (46.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (46.782693866s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.78s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220512232515-516044 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/newest-cni/serial/Stop (10.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220512232515-516044 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220512232515-516044 --alsologtostderr -v=3: (10.760997124s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.76s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220512232515-516044 -n newest-cni-20220512232515-516044
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220512232515-516044 -n newest-cni-20220512232515-516044: exit status 7 (101.832201ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220512232515-516044 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (20.16s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220512232515-516044 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220512232515-516044 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (19.748669573s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220512232515-516044 -n newest-cni-20220512232515-516044
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.16s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220512231715-516044 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220512231715-516044 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-2db9d" [b41d61ec-9fe4-40c9-b5ee-158291649a8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-2db9d" [b41d61ec-9fe4-40c9-b5ee-158291649a8d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.010601074s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.19s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220512231715-516044 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (5.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220512231715-516044 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220512231715-516044 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.203117423s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.20s)
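The hairpin probe has a pod dial its own service name; nc exits 1 and the test still passes, i.e. the recorded failure did not fail the test for this network configuration. Reproduced in isolation (sketch):

    kubectl --context auto-20220512231715-516044 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"; echo "nc exit: $?"
    # exit 1 matches the behaviour recorded above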

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220512232515-516044 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220512232515-516044 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220512232515-516044 -n newest-cni-20220512232515-516044
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220512232515-516044 -n newest-cni-20220512232515-516044: exit status 2 (399.205166ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220512232515-516044 -n newest-cni-20220512232515-516044
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220512232515-516044 -n newest-cni-20220512232515-516044: exit status 2 (398.873656ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220512232515-516044 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220512232515-516044 -n newest-cni-20220512232515-516044
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220512232515-516044 -n newest-cni-20220512232515-516044
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

TestNetworkPlugins/group/false/Start (42.8s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p false-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (42.797181907s)
--- PASS: TestNetworkPlugins/group/false/Start (42.80s)

TestNetworkPlugins/group/cilium/Start (80.55s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m20.552928789s)
--- PASS: TestNetworkPlugins/group/cilium/Start (80.55s)
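With the cilium profile up, the CNI itself can be spot-checked by listing the Cilium agent pods (sketch; k8s-app=cilium is the label Cilium's stock manifests conventionally apply, an assumption rather than something this report asserts):

    kubectl --context cilium-20220512231715-516044 -n kube-system get pods -l k8s-app=cilium -o wide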

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-zzlkl" [f2ff28e8-32cf-4440-95be-1616adf13f38] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01208006s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-zzlkl" [f2ff28e8-32cf-4440-95be-1616adf13f38] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006276399s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220512231738-516044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220512231738-516044 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.47s)
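Note: to eyeball the same image list by hand, the crictl JSON can be filtered with jq (assuming jq is available on the host; this is an illustration, not part of the test):

    out/minikube-linux-amd64 ssh -p old-k8s-version-20220512231738-516044 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'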

TestStartStop/group/old-k8s-version/serial/Pause (3.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220512231738-516044 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220512231738-516044 -n old-k8s-version-20220512231738-516044
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220512231738-516044 -n old-k8s-version-20220512231738-516044: exit status 2 (494.908146ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220512231738-516044 -n old-k8s-version-20220512231738-516044
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220512231738-516044 -n old-k8s-version-20220512231738-516044: exit status 2 (458.309378ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220512231738-516044 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220512231738-516044 -n old-k8s-version-20220512231738-516044
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220512231738-516044 -n old-k8s-version-20220512231738-516044
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.58s)
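Note: the "status error: exit status 2 (may be ok)" lines are expected at this point. minikube status exits non-zero when a queried component is not Running, and immediately after "minikube pause" the apiserver reports Paused and the kubelet Stopped, so the test tolerates exit status 2 and only verifies the reported states before unpausing.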

TestNetworkPlugins/group/false/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20220512231715-516044 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

TestNetworkPlugins/group/false/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220512231715-516044 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-mm9l6" [af44fdc6-b99f-4a0e-bfd4-1e39006b3330] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-mm9l6" [af44fdc6-b99f-4a0e-bfd4-1e39006b3330] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.008524756s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.25s)
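Note: testdata/netcat-deployment.yaml itself is not reproduced in this report; from the log one can infer it creates a Deployment labeled app=netcat whose pod runs a dnsutils container, plus a netcat Service on port 8080. A rough shell sketch of the same shape (the image name is a placeholder, not the one the manifest actually uses):

    kubectl --context false-20220512231715-516044 create deployment netcat --image=<dnsutils-image>
    kubectl --context false-20220512231715-516044 expose deployment netcat --port=8080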

TestNetworkPlugins/group/false/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220512231715-516044 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)
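Note on the nc flags used throughout these checks: -z scans for a listener without sending data, -w 5 sets a five-second connection timeout, and -i 5 adds a five-second delay between lines sent (and between ports when scanning); exit status 0 therefore means the port accepted a TCP connection.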

TestNetworkPlugins/group/false/HairPin (5.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220512231715-516044 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220512231715-516044 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.134761748s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.13s)
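Note: the PASS despite exit status 1 is deliberate. The HairPin check has the netcat pod dial its own Service address; with --cni=false there is no CNI to provide hairpin NAT, so the connection is expected to fail, and the harness appears to treat the failed nc invocation as the correct outcome (compare the cilium run below, where the same command succeeds).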

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-5tscv" [85986a42-de9c-4b28-9b7d-3727bd5b6fd8] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.021091819s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.59s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220512231715-516044 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.59s)

TestNetworkPlugins/group/cilium/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220512231715-516044 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220512231715-516044 replace --force -f testdata/netcat-deployment.yaml: (1.10479456s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-65gkq" [6eb61b3a-1c95-4483-af37-f2ff8387f9fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-65gkq" [6eb61b3a-1c95-4483-af37-f2ff8387f9fa] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.006108014s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.14s)

TestNetworkPlugins/group/cilium/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220512231715-516044 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

TestNetworkPlugins/group/cilium/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220512231715-516044 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.15s)

TestNetworkPlugins/group/cilium/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220512231715-516044 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (42.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0512 23:28:21.749072  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:28:23.272828  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/addons-20220512225124-516044/client.crt: no such file or directory
E0512 23:28:52.278879  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:52.284161  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:52.294409  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:52.314681  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:52.354966  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:52.435282  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:52.595651  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:52.916247  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:53.556627  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:54.836905  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:28:57.397258  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (42.140526587s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.14s)
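Note: unlike the --cni=<name> runs above, --enable-default-cni=true has minikube lay down its own minimal default bridge CNI configuration rather than install a third-party plugin, which is consistent with this start finishing in roughly the same time as the plugin-free runs.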

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220512231715-516044 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220512231715-516044 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-ch8fc" [7dbbad1c-62f0-414a-853c-465852870194] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0512 23:29:02.518143  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/no-preload-20220512231753-516044/client.crt: no such file or directory
E0512 23:29:03.178495  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:03.183791  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:03.194040  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:03.214304  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:03.254675  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:03.335050  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:03.496078  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:03.816504  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:04.456660  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-ch8fc" [7dbbad1c-62f0-414a-853c-465852870194] Running
E0512 23:29:05.736801  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
E0512 23:29:08.297378  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/default-k8s-different-port-20220512231821-516044/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007862602s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-6z6nx" [6ffbcd0f-ff86-4fbc-906e-472268aebcf5] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0512 23:32:40.829313  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/false-20220512231715-516044/client.crt: no such file or directory
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01322067s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-6z6nx" [6ffbcd0f-ff86-4fbc-906e-472268aebcf5] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00767401s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220512231813-516044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220512231813-516044 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

TestNetworkPlugins/group/bridge/Start (44.55s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker
E0512 23:35:41.940601  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/cilium-20220512231715-516044/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (44.547647803s)
--- PASS: TestNetworkPlugins/group/bridge/Start (44.55s)

TestNetworkPlugins/group/kubenet/Start (285.93s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220512231715-516044 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (4m45.92960488s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (285.93s)
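Note: kubenet is the kubelet's built-in network plugin rather than a CNI plugin, which is why this run selects it with --network-plugin=kubenet instead of a --cni flag.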

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220512231715-516044 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220512231715-516044 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-njjbz" [250f3c9f-1e66-4ae7-9e4f-8be06e72f559] Pending
helpers_test.go:342: "netcat-668db85669-njjbz" [250f3c9f-1e66-4ae7-9e4f-8be06e72f559] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-njjbz" [250f3c9f-1e66-4ae7-9e4f-8be06e72f559] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.007539682s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.20s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20220512231715-516044 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220512231715-516044 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-qzwqj" [47b14f79-c25d-48bb-b3b1-f4c318fa62ef] Pending
helpers_test.go:342: "netcat-668db85669-qzwqj" [47b14f79-c25d-48bb-b3b1-f4c318fa62ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-qzwqj" [47b14f79-c25d-48bb-b3b1-f4c318fa62ef] Running

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.006238274s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.26s)

Test skip (21/281)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
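Note: minikube ships the Kubernetes images for each version/runtime pair as a single preloaded tarball; when that preload is present, caching individual images (and extracting binaries separately) is redundant, which is what these skips record.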

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.5/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.5/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.5/cached-images (0.00s)

TestDownloadOnly/v1.23.5/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.5/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.5/binaries (0.00s)

TestDownloadOnly/v1.23.5/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.5/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.5/kubectl (0.00s)

TestDownloadOnly/v1.23.6-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.6-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/binaries (0.00s)

TestDownloadOnly/v1.23.6-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220512231821-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220512231821-516044
E0512 23:18:21.749236  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:21.754535  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
E0512 23:18:21.764856  516044 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-12739-512689-68de712bd09ffe1e21223c2fc0b3d10921a9e762/.minikube/profiles/skaffold-20220512231237-516044/client.crt: no such file or directory
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)

TestNetworkPlugins/group/flannel (0.49s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220512231715-516044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220512231715-516044
--- SKIP: TestNetworkPlugins/group/flannel (0.49s)
