Test Report: Docker_Linux 13641

2a71df5eb5ec0ca8243173c97a5614cea8fb2e82:2022-02-21:22745

Failed tests (9/279)

TestDownloadOnly/v1.23.5-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.5-rc.0/cached-images
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.5-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.23.5-rc.0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/pause_3.6" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/pause_3.6: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1: no such file or directory
aaa_download_only_test.go:135: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7: no such file or directory
--- FAIL: TestDownloadOnly/v1.23.5-rc.0/cached-images (0.00s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (553.27s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker
E0221 08:54:33.149049    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (9m13.225436451s)

-- stdout --
	* [calico-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20220221084934-6550 in cluster calico-20220221084934-6550
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner

-- /stdout --
** stderr ** 
	I0221 08:54:31.669336  223679 out.go:297] Setting OutFile to fd 1 ...
	I0221 08:54:31.669431  223679 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:54:31.669456  223679 out.go:310] Setting ErrFile to fd 2...
	I0221 08:54:31.669459  223679 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:54:31.669575  223679 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
	I0221 08:54:31.669863  223679 out.go:304] Setting JSON to false
	I0221 08:54:31.671533  223679 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2226,"bootTime":1645431446,"procs":815,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0221 08:54:31.671604  223679 start.go:122] virtualization: kvm guest
	I0221 08:54:31.674304  223679 out.go:176] * [calico-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0221 08:54:31.675747  223679 out.go:176]   - MINIKUBE_LOCATION=13641
	I0221 08:54:31.674505  223679 notify.go:193] Checking for updates...
	I0221 08:54:31.677072  223679 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0221 08:54:31.678381  223679 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	I0221 08:54:31.679665  223679 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	I0221 08:54:31.680895  223679 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0221 08:54:31.681490  223679 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:54:31.681597  223679 config.go:176] Loaded profile config "cert-expiration-20220221085105-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:54:31.681682  223679 config.go:176] Loaded profile config "cilium-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:54:31.681731  223679 driver.go:344] Setting default libvirt URI to qemu:///system
	I0221 08:54:31.726270  223679 docker.go:132] docker version: linux-20.10.12
	I0221 08:54:31.726387  223679 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:54:31.828014  223679 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:31.757670791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:54:31.828153  223679 docker.go:237] overlay module found
	I0221 08:54:31.830095  223679 out.go:176] * Using the docker driver based on user configuration
	I0221 08:54:31.830122  223679 start.go:281] selected driver: docker
	I0221 08:54:31.830127  223679 start.go:798] validating driver "docker" against <nil>
	I0221 08:54:31.830150  223679 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0221 08:54:31.830216  223679 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0221 08:54:31.830236  223679 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0221 08:54:31.831700  223679 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0221 08:54:31.832312  223679 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:54:31.933660  223679 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:31.865164378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:54:31.933812  223679 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0221 08:54:31.933956  223679 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0221 08:54:31.933978  223679 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0221 08:54:31.933991  223679 cni.go:93] Creating CNI manager for "calico"
	I0221 08:54:31.934000  223679 start_flags.go:297] Found "Calico" CNI - setting NetworkPlugin=cni
	I0221 08:54:31.934009  223679 start_flags.go:302] config:
	{Name:calico-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:54:31.936655  223679 out.go:176] * Starting control plane node calico-20220221084934-6550 in cluster calico-20220221084934-6550
	I0221 08:54:31.936718  223679 cache.go:120] Beginning downloading kic base image for docker with docker
	I0221 08:54:31.938119  223679 out.go:176] * Pulling base image ...
	I0221 08:54:31.938156  223679 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
	I0221 08:54:31.938186  223679 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4
	I0221 08:54:31.938198  223679 cache.go:57] Caching tarball of preloaded images
	I0221 08:54:31.938250  223679 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
	I0221 08:54:31.938441  223679 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0221 08:54:31.938462  223679 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.4 on docker
	I0221 08:54:31.938612  223679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/config.json ...
	I0221 08:54:31.938638  223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/config.json: {Name:mk6dfec3eeded4259016eef6692333e08748c03e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:32.001614  223679 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
	I0221 08:54:32.001646  223679 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
	I0221 08:54:32.001665  223679 cache.go:208] Successfully downloaded all kic artifacts
	I0221 08:54:32.001710  223679 start.go:313] acquiring machines lock for calico-20220221084934-6550: {Name:mk9bd20451a3b8275874174c12a3c8e8fcabb93f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:54:32.001861  223679 start.go:317] acquired machines lock for "calico-20220221084934-6550" in 125.883µs
	I0221 08:54:32.001895  223679 start.go:89] Provisioning new machine with config: &{Name:calico-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0221 08:54:32.002014  223679 start.go:126] createHost starting for "" (driver="docker")
	I0221 08:54:32.004421  223679 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0221 08:54:32.004718  223679 start.go:160] libmachine.API.Create for "calico-20220221084934-6550" (driver="docker")
	I0221 08:54:32.004755  223679 client.go:168] LocalClient.Create starting
	I0221 08:54:32.004831  223679 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem
	I0221 08:54:32.004868  223679 main.go:130] libmachine: Decoding PEM data...
	I0221 08:54:32.004896  223679 main.go:130] libmachine: Parsing certificate...
	I0221 08:54:32.004981  223679 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem
	I0221 08:54:32.005006  223679 main.go:130] libmachine: Decoding PEM data...
	I0221 08:54:32.005024  223679 main.go:130] libmachine: Parsing certificate...
	I0221 08:54:32.005451  223679 cli_runner.go:133] Run: docker network inspect calico-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0221 08:54:32.041628  223679 cli_runner.go:180] docker network inspect calico-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0221 08:54:32.041708  223679 network_create.go:254] running [docker network inspect calico-20220221084934-6550] to gather additional debugging logs...
	I0221 08:54:32.041731  223679 cli_runner.go:133] Run: docker network inspect calico-20220221084934-6550
	W0221 08:54:32.081587  223679 cli_runner.go:180] docker network inspect calico-20220221084934-6550 returned with exit code 1
	I0221 08:54:32.081619  223679 network_create.go:257] error running [docker network inspect calico-20220221084934-6550]: docker network inspect calico-20220221084934-6550: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220221084934-6550
	I0221 08:54:32.081656  223679 network_create.go:259] output of [docker network inspect calico-20220221084934-6550]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220221084934-6550
	
	** /stderr **
	I0221 08:54:32.081716  223679 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0221 08:54:32.120427  223679 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-8af72e223855 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:a5:dd:c8}}
	I0221 08:54:32.121233  223679 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3becfb688ac0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ae:26:de:33}}
	I0221 08:54:32.122028  223679 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000618270] misses:0}
	I0221 08:54:32.122088  223679 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0221 08:54:32.122116  223679 network_create.go:106] attempt to create docker network calico-20220221084934-6550 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0221 08:54:32.122177  223679 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220221084934-6550
	I0221 08:54:32.217845  223679 network_create.go:90] docker network calico-20220221084934-6550 192.168.67.0/24 created
	I0221 08:54:32.217884  223679 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220221084934-6550" container
	I0221 08:54:32.217960  223679 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0221 08:54:32.260460  223679 cli_runner.go:133] Run: docker volume create calico-20220221084934-6550 --label name.minikube.sigs.k8s.io=calico-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true
	I0221 08:54:32.294046  223679 oci.go:102] Successfully created a docker volume calico-20220221084934-6550
	I0221 08:54:32.294150  223679 cli_runner.go:133] Run: docker run --rm --name calico-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220221084934-6550 --entrypoint /usr/bin/test -v calico-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib
	I0221 08:54:32.998319  223679 oci.go:106] Successfully prepared a docker volume calico-20220221084934-6550
	I0221 08:54:32.998383  223679 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
	I0221 08:54:32.998411  223679 kic.go:179] Starting extracting preloaded images to volume ...
	I0221 08:54:32.998566  223679 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir
	I0221 08:54:39.205880  223679 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (6.207231146s)
	I0221 08:54:39.205919  223679 kic.go:188] duration metric: took 6.207506 seconds to extract preloaded images to volume
	W0221 08:54:39.205955  223679 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0221 08:54:39.205964  223679 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0221 08:54:39.206012  223679 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0221 08:54:39.302203  223679 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220221084934-6550 --name calico-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220221084934-6550 --network calico-20220221084934-6550 --ip 192.168.67.2 --volume calico-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2
	I0221 08:54:39.751892  223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Running}}
	I0221 08:54:39.788728  223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}}
	I0221 08:54:39.827631  223679 cli_runner.go:133] Run: docker exec calico-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables
	I0221 08:54:39.899385  223679 oci.go:281] the created container "calico-20220221084934-6550" has a running status.
	I0221 08:54:39.899415  223679 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa...
	I0221 08:54:40.325976  223679 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0221 08:54:40.437286  223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}}
	I0221 08:54:40.476120  223679 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0221 08:54:40.476145  223679 kic_runner.go:114] Args: [docker exec --privileged calico-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0221 08:54:40.568825  223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}}
	I0221 08:54:40.605419  223679 machine.go:88] provisioning docker machine ...
	I0221 08:54:40.605466  223679 ubuntu.go:169] provisioning hostname "calico-20220221084934-6550"
	I0221 08:54:40.605522  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:40.645726  223679 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:40.645994  223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49364 <nil> <nil>}
	I0221 08:54:40.646023  223679 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20220221084934-6550 && echo "calico-20220221084934-6550" | sudo tee /etc/hostname
	I0221 08:54:40.780620  223679 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220221084934-6550
	
	I0221 08:54:40.780691  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:40.814209  223679 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:40.814413  223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49364 <nil> <nil>}
	I0221 08:54:40.814449  223679 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220221084934-6550' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220221084934-6550/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220221084934-6550' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0221 08:54:40.938947  223679 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0221 08:54:40.938980  223679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube}
	I0221 08:54:40.939035  223679 ubuntu.go:177] setting up certificates
	I0221 08:54:40.939046  223679 provision.go:83] configureAuth start
	I0221 08:54:40.939089  223679 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220221084934-6550
	I0221 08:54:40.975796  223679 provision.go:138] copyHostCerts
	I0221 08:54:40.975850  223679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ...
	I0221 08:54:40.975857  223679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem
	I0221 08:54:40.975903  223679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes)
	I0221 08:54:40.975970  223679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ...
	I0221 08:54:40.975988  223679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem
	I0221 08:54:40.976005  223679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes)
	I0221 08:54:40.976063  223679 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ...
	I0221 08:54:40.976102  223679 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem
	I0221 08:54:40.976121  223679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes)
	I0221 08:54:40.976166  223679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.calico-20220221084934-6550 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220221084934-6550]
	I0221 08:54:41.313676  223679 provision.go:172] copyRemoteCerts
	I0221 08:54:41.313739  223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0221 08:54:41.313767  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:41.349452  223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:41.438412  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0221 08:54:41.457832  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0221 08:54:41.476216  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0221 08:54:41.495583  223679 provision.go:86] duration metric: configureAuth took 556.525196ms
	I0221 08:54:41.495616  223679 ubuntu.go:193] setting minikube options for container-runtime
	I0221 08:54:41.495815  223679 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:54:41.495870  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:41.533059  223679 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:41.533198  223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49364 <nil> <nil>}
	I0221 08:54:41.533213  223679 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0221 08:54:41.655048  223679 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0221 08:54:41.655077  223679 ubuntu.go:71] root file system type: overlay
	I0221 08:54:41.655267  223679 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0221 08:54:41.655327  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:41.689366  223679 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:41.689505  223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49364 <nil> <nil>}
	I0221 08:54:41.689565  223679 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0221 08:54:41.822029  223679 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0221 08:54:41.822112  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:41.859291  223679 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:41.859435  223679 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49364 <nil> <nil>}
	I0221 08:54:41.859452  223679 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0221 08:54:42.534877  223679 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-21 08:54:41.817826590 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0221 08:54:42.534914  223679 machine.go:91] provisioned docker machine in 1.929466074s
	I0221 08:54:42.534924  223679 client.go:171] LocalClient.Create took 10.53016081s
	I0221 08:54:42.534936  223679 start.go:168] duration metric: libmachine.API.Create for "calico-20220221084934-6550" took 10.530218344s
	I0221 08:54:42.534945  223679 start.go:267] post-start starting for "calico-20220221084934-6550" (driver="docker")
	I0221 08:54:42.534950  223679 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0221 08:54:42.535085  223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0221 08:54:42.535124  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:42.570227  223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:42.659420  223679 ssh_runner.go:195] Run: cat /etc/os-release
	I0221 08:54:42.662549  223679 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0221 08:54:42.662589  223679 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0221 08:54:42.662602  223679 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0221 08:54:42.662610  223679 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0221 08:54:42.662627  223679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
	I0221 08:54:42.662691  223679 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ...
	I0221 08:54:42.662786  223679 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs
	I0221 08:54:42.662899  223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0221 08:54:42.670331  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes)
	I0221 08:54:42.689477  223679 start.go:270] post-start completed in 154.520884ms
	I0221 08:54:42.689843  223679 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220221084934-6550
	I0221 08:54:42.730023  223679 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/config.json ...
	I0221 08:54:42.730315  223679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0221 08:54:42.730369  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:42.767727  223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:42.851528  223679 start.go:129] duration metric: createHost completed in 10.849499789s
	I0221 08:54:42.851567  223679 start.go:80] releasing machines lock for "calico-20220221084934-6550", held for 10.849686754s
	I0221 08:54:42.851656  223679 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220221084934-6550
	I0221 08:54:42.893166  223679 ssh_runner.go:195] Run: systemctl --version
	I0221 08:54:42.893224  223679 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0221 08:54:42.893229  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:42.893280  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:54:42.935097  223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:42.939437  223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:43.165553  223679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0221 08:54:43.176428  223679 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0221 08:54:43.186305  223679 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0221 08:54:43.186358  223679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0221 08:54:43.196307  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0221 08:54:43.209884  223679 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0221 08:54:43.297602  223679 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0221 08:54:43.367679  223679 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0221 08:54:43.377417  223679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0221 08:54:43.457703  223679 ssh_runner.go:195] Run: sudo systemctl start docker
	I0221 08:54:43.467810  223679 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0221 08:54:43.509287  223679 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0221 08:54:43.551952  223679 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
	I0221 08:54:43.552042  223679 cli_runner.go:133] Run: docker network inspect calico-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0221 08:54:43.590101  223679 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0221 08:54:43.593455  223679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0221 08:54:43.604974  223679 out.go:176]   - kubelet.housekeeping-interval=5m
	I0221 08:54:43.605063  223679 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
	I0221 08:54:43.605146  223679 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0221 08:54:43.639090  223679 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.4
	k8s.gcr.io/kube-proxy:v1.23.4
	k8s.gcr.io/kube-controller-manager:v1.23.4
	k8s.gcr.io/kube-scheduler:v1.23.4
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0221 08:54:43.639119  223679 docker.go:537] Images already preloaded, skipping extraction
	I0221 08:54:43.639171  223679 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0221 08:54:43.676921  223679 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.4
	k8s.gcr.io/kube-proxy:v1.23.4
	k8s.gcr.io/kube-controller-manager:v1.23.4
	k8s.gcr.io/kube-scheduler:v1.23.4
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0221 08:54:43.676951  223679 cache_images.go:84] Images are preloaded, skipping loading
	I0221 08:54:43.677005  223679 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0221 08:54:43.775624  223679 cni.go:93] Creating CNI manager for "calico"
	I0221 08:54:43.775650  223679 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0221 08:54:43.775662  223679 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220221084934-6550 NodeName:calico-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0221 08:54:43.775783  223679 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220221084934-6550"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0221 08:54:43.775860  223679 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220221084934-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0221 08:54:43.775903  223679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4
	I0221 08:54:43.783049  223679 binaries.go:44] Found k8s binaries, skipping transfer
	I0221 08:54:43.783112  223679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0221 08:54:43.790080  223679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes)
	I0221 08:54:43.803657  223679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0221 08:54:43.817305  223679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0221 08:54:43.832073  223679 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0221 08:54:43.835308  223679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
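The `/etc/hosts` update at 08:54:43.835 uses a common idempotent pattern: filter out any existing entry for the name, append a fresh one, write to a temp file, then copy it back. A standalone sketch of the same pattern against a throwaway file (the 192.168.67.3 stale entry is invented for illustration):

```shell
# Idempotently pin control-plane.minikube.internal to an IP in a hosts-style
# file: drop any existing entry for the name, then append the new one.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.67.3\tcontrol-plane.minikube.internal\n' > "$HOSTS"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  echo $'192.168.67.2\tcontrol-plane.minikube.internal'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Running the block twice leaves the file unchanged, which is why minikube can re-run it safely on every start.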
	I0221 08:54:43.845202  223679 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550 for IP: 192.168.67.2
	I0221 08:54:43.845320  223679 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key
	I0221 08:54:43.845374  223679 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key
	I0221 08:54:43.845436  223679 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.key
	I0221 08:54:43.845456  223679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.crt with IP's: []
	I0221 08:54:44.006432  223679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.crt ...
	I0221 08:54:44.006474  223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.crt: {Name:mk855fbba0271a5174ba2c17a62536f5fc002b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:44.006707  223679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.key ...
	I0221 08:54:44.006730  223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/client.key: {Name:mk6b07f68ad6023650adafd135358280d1825bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:44.006871  223679 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e
	I0221 08:54:44.006897  223679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0221 08:54:44.294014  223679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e ...
	I0221 08:54:44.294052  223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e: {Name:mkb18de625bf9d4b1da4d8c0e20b7c74d4689d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:44.294290  223679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e ...
	I0221 08:54:44.294313  223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e: {Name:mk342d0f120f3782db5aaad19a32574ae0c04f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:44.294434  223679 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt
	I0221 08:54:44.294491  223679 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key
	I0221 08:54:44.294537  223679 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key
	I0221 08:54:44.294551  223679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt with IP's: []
	I0221 08:54:44.518976  223679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt ...
	I0221 08:54:44.519036  223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt: {Name:mk6f6f43267f4534ff28d48ba090d2600cf0e9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:44.519265  223679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key ...
	I0221 08:54:44.519291  223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key: {Name:mk80acd65e2e1b5036bf09d5fa5ec12f9e2086fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:44.519541  223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes)
	W0221 08:54:44.519593  223679 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes
	I0221 08:54:44.519633  223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes)
	I0221 08:54:44.519678  223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes)
	I0221 08:54:44.519730  223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes)
	I0221 08:54:44.519770  223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes)
	I0221 08:54:44.519828  223679 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes)
	I0221 08:54:44.521210  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0221 08:54:44.558411  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0221 08:54:44.579347  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0221 08:54:44.604843  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/calico-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0221 08:54:44.627275  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0221 08:54:44.648374  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0221 08:54:44.669879  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0221 08:54:44.689847  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0221 08:54:44.709519  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes)
	I0221 08:54:44.733150  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0221 08:54:44.756964  223679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes)
	I0221 08:54:44.778521  223679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0221 08:54:44.793575  223679 ssh_runner.go:195] Run: openssl version
	I0221 08:54:44.798665  223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem"
	I0221 08:54:44.808787  223679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem
	I0221 08:54:44.812470  223679 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem
	I0221 08:54:44.812527  223679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem
	I0221 08:54:44.817903  223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0"
	I0221 08:54:44.827601  223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0221 08:54:44.865122  223679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0221 08:54:44.891782  223679 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem
	I0221 08:54:44.891866  223679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0221 08:54:44.899116  223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0221 08:54:44.909368  223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem"
	I0221 08:54:44.920591  223679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem
	I0221 08:54:44.925480  223679 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem
	I0221 08:54:44.925592  223679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem
	I0221 08:54:44.932674  223679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0"
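The `openssl x509 -hash` / `ln -fs` pairs above (08:54:44.812–44.932) implement OpenSSL's CA lookup convention: certificates in a trust directory are found via `<subject-hash>.0` symlinks. A minimal sketch of that convention against a temporary directory, assuming only the `openssl` CLI (the `demoCA` name is illustrative):

```shell
# OpenSSL resolves CA certs in a directory by subject-hash symlinks
# (<hash>.0), the same scheme the log builds under /etc/ssl/certs.
DIR=$(mktemp -d)
# Generate a throwaway self-signed cert to stand in for minikubeCA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$DIR/ca.key" \
  -out "$DIR/ca.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
ls -l "$DIR/$HASH.0"
```

The `test -L || ln -fs` guard seen in the log makes the symlink creation idempotent across repeated cluster starts.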
	I0221 08:54:44.947547  223679 kubeadm.go:391] StartCluster: {Name:calico-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:calico-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:54:44.947712  223679 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0221 08:54:44.991618  223679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0221 08:54:44.998885  223679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0221 08:54:45.015354  223679 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0221 08:54:45.015414  223679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0221 08:54:45.028145  223679 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0221 08:54:45.028193  223679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0221 08:54:45.659427  223679 out.go:203]   - Generating certificates and keys ...
	I0221 08:54:48.200933  223679 out.go:203]   - Booting up control plane ...
	I0221 08:55:02.748988  223679 out.go:203]   - Configuring RBAC rules ...
	I0221 08:55:03.208968  223679 cni.go:93] Creating CNI manager for "calico"
	I0221 08:55:03.211365  223679 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0221 08:55:03.211657  223679 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4/kubectl ...
	I0221 08:55:03.211681  223679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0221 08:55:03.227608  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0221 08:55:04.757338  223679 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.529692552s)
	I0221 08:55:04.757387  223679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0221 08:55:04.757470  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:04.757473  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=calico-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T08_55_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:04.850953  223679 ops.go:34] apiserver oom_adj: -16
	I0221 08:55:04.851063  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:05.440068  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:05.940254  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:06.440215  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:06.940222  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:07.440213  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:07.939923  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:08.439546  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:08.940223  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:09.440124  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:09.939702  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:10.439575  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:10.940202  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:11.439703  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:11.939963  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:12.439836  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:12.939553  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:13.439654  223679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:13.497568  223679 kubeadm.go:1020] duration metric: took 8.740153817s to wait for elevateKubeSystemPrivileges.
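The repeated `kubectl get sa default` runs between 08:55:04 and 08:55:13 are a poll-until-success loop with roughly 500ms between attempts, which is what the 8.74s `elevateKubeSystemPrivileges` duration measures. A generic sketch of that loop; `wait_for` and `probe` are illustrative names, not minikube code, and the probe here is a stand-in that succeeds on its third invocation:

```shell
# Run a command repeatedly until it succeeds or the attempt budget runs out.
wait_for() {
  local tries=$1; shift
  local i
  for i in $(seq 1 "$tries"); do
    "$@" && return 0
    sleep 0.1   # the log above spaces attempts ~500ms apart
  done
  return 1
}
# Demo probe: fails twice, then succeeds (stands in for `kubectl get sa default`).
STAMP=$(mktemp); echo 0 > "$STAMP"
probe() { n=$(cat "$STAMP"); echo $((n+1)) > "$STAMP"; [ "$n" -ge 2 ]; }
wait_for 5 probe && echo ready
```

Bounding the loop by attempts (or, as minikube does, by a wall-clock timeout) keeps a missing service account from hanging the start path indefinitely.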
	I0221 08:55:13.497601  223679 kubeadm.go:393] StartCluster complete in 28.550066987s
	I0221 08:55:13.497616  223679 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:55:13.497683  223679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	I0221 08:55:13.498747  223679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:55:14.022464  223679 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220221084934-6550" rescaled to 1
	I0221 08:55:14.022509  223679 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0221 08:55:14.024435  223679 out.go:176] * Verifying Kubernetes components...
	I0221 08:55:14.024485  223679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0221 08:55:14.022561  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0221 08:55:14.022577  223679 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0221 08:55:14.024575  223679 addons.go:65] Setting storage-provisioner=true in profile "calico-20220221084934-6550"
	I0221 08:55:14.022730  223679 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:55:14.024592  223679 addons.go:65] Setting default-storageclass=true in profile "calico-20220221084934-6550"
	I0221 08:55:14.024599  223679 addons.go:153] Setting addon storage-provisioner=true in "calico-20220221084934-6550"
	I0221 08:55:14.024606  223679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220221084934-6550"
	W0221 08:55:14.024612  223679 addons.go:165] addon storage-provisioner should already be in state true
	I0221 08:55:14.024642  223679 host.go:66] Checking if "calico-20220221084934-6550" exists ...
	I0221 08:55:14.024913  223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}}
	I0221 08:55:14.025104  223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}}
	I0221 08:55:14.038203  223679 node_ready.go:35] waiting up to 5m0s for node "calico-20220221084934-6550" to be "Ready" ...
	I0221 08:55:14.042490  223679 node_ready.go:49] node "calico-20220221084934-6550" has status "Ready":"True"
	I0221 08:55:14.042526  223679 node_ready.go:38] duration metric: took 4.281504ms waiting for node "calico-20220221084934-6550" to be "Ready" ...
	I0221 08:55:14.042537  223679 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0221 08:55:14.064216  223679 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-zcdj6" in "kube-system" namespace to be "Ready" ...
	I0221 08:55:14.068536  223679 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0221 08:55:14.068650  223679 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0221 08:55:14.068667  223679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0221 08:55:14.068718  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:55:14.071204  223679 addons.go:153] Setting addon default-storageclass=true in "calico-20220221084934-6550"
	W0221 08:55:14.071226  223679 addons.go:165] addon default-storageclass should already be in state true
	I0221 08:55:14.071248  223679 host.go:66] Checking if "calico-20220221084934-6550" exists ...
	I0221 08:55:14.071675  223679 cli_runner.go:133] Run: docker container inspect calico-20220221084934-6550 --format={{.State.Status}}
	I0221 08:55:14.095438  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0221 08:55:14.121614  223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
	I0221 08:55:14.130797  223679 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0221 08:55:14.130824  223679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0221 08:55:14.130878  223679 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220221084934-6550
	I0221 08:55:14.166553  223679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49364 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/calico-20220221084934-6550/id_rsa Username:docker}
	I0221 08:55:14.505375  223679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0221 08:55:14.506353  223679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0221 08:55:16.015822  223679 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.92034198s)
	I0221 08:55:16.015851  223679 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0221 08:55:16.020294  223679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.514878245s)
	I0221 08:55:16.106779  223679 pod_ready.go:102] pod "calico-node-zcdj6" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:16.116155  223679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.609765842s)
	I0221 08:55:16.117844  223679 out.go:176] * Enabled addons: default-storageclass, storage-provisioner
	I0221 08:55:16.117871  223679 addons.go:417] enableAddons completed in 2.095295955s
	... (103 similar pod_ready.go:102 lines elided: pod "calico-node-zcdj6" in "kube-system" namespace repeatedly reported status "Ready":"False", polled roughly every 2.5s from 08:55:18 through 08:59:12) ...
	I0221 08:59:14.112706  223679 pod_ready.go:81] duration metric: took 4m0.048450561s waiting for pod "calico-node-zcdj6" in "kube-system" namespace to be "Ready" ...
	E0221 08:59:14.112734  223679 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0221 08:59:14.112746  223679 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.117793  223679 pod_ready.go:92] pod "etcd-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:14.117820  223679 pod_ready.go:81] duration metric: took 5.066157ms waiting for pod "etcd-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.117832  223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.122627  223679 pod_ready.go:92] pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:14.122647  223679 pod_ready.go:81] duration metric: took 4.807147ms waiting for pod "kube-apiserver-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.122656  223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.127594  223679 pod_ready.go:92] pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:14.127616  223679 pod_ready.go:81] duration metric: took 4.954276ms waiting for pod "kube-controller-manager-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.127627  223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.480801  223679 pod_ready.go:92] pod "kube-proxy-kwcvx" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:14.480829  223679 pod_ready.go:81] duration metric: took 353.19554ms waiting for pod "kube-proxy-kwcvx" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.480842  223679 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.879906  223679 pod_ready.go:92] pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:14.879927  223679 pod_ready.go:81] duration metric: took 399.077104ms waiting for pod "kube-scheduler-calico-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:14.879937  223679 pod_ready.go:38] duration metric: took 4m0.837387313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0221 08:59:14.879961  223679 api_server.go:51] waiting for apiserver process to appear ...
	I0221 08:59:14.880012  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0221 08:59:14.942433  223679 logs.go:274] 1 containers: [5b808a7ef4a2]
	I0221 08:59:14.942510  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0221 08:59:15.037787  223679 logs.go:274] 1 containers: [96cc9489b33e]
	I0221 08:59:15.037848  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0221 08:59:15.134487  223679 logs.go:274] 0 containers: []
	W0221 08:59:15.134520  223679 logs.go:276] No container was found matching "coredns"
	I0221 08:59:15.134573  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0221 08:59:15.229656  223679 logs.go:274] 1 containers: [f012d1d45e22]
	I0221 08:59:15.229733  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0221 08:59:15.320906  223679 logs.go:274] 1 containers: [449cc37a92fe]
	I0221 08:59:15.320985  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0221 08:59:15.417453  223679 logs.go:274] 0 containers: []
	W0221 08:59:15.417481  223679 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0221 08:59:15.417528  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0221 08:59:15.513893  223679 logs.go:274] 2 containers: [528acfa448ce f6cf402c0c9d]
	I0221 08:59:15.513990  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0221 08:59:15.550415  223679 logs.go:274] 1 containers: [cddc9ef001f2]
	I0221 08:59:15.550454  223679 logs.go:123] Gathering logs for dmesg ...
	I0221 08:59:15.550465  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0221 08:59:15.576242  223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ...
	I0221 08:59:15.576295  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e"
	I0221 08:59:15.618102  223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ...
	I0221 08:59:15.618136  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe"
	I0221 08:59:15.656954  223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ...
	I0221 08:59:15.656987  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce"
	I0221 08:59:15.722111  223679 logs.go:123] Gathering logs for storage-provisioner [f6cf402c0c9d] ...
	I0221 08:59:15.722147  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6cf402c0c9d"
	I0221 08:59:15.808702  223679 logs.go:123] Gathering logs for Docker ...
	I0221 08:59:15.808737  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0221 08:59:15.889269  223679 logs.go:123] Gathering logs for container status ...
	I0221 08:59:15.889312  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0221 08:59:15.945538  223679 logs.go:123] Gathering logs for kubelet ...
	I0221 08:59:15.945571  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0221 08:59:16.147141  223679 logs.go:123] Gathering logs for describe nodes ...
	I0221 08:59:16.147186  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0221 08:59:16.338070  223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ...
	I0221 08:59:16.338111  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2"
	I0221 08:59:16.431605  223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ...
	I0221 08:59:16.431645  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22"
	I0221 08:59:16.530228  223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ...
	I0221 08:59:16.530264  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2"
	I0221 08:59:19.103148  223679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0221 08:59:19.129062  223679 api_server.go:71] duration metric: took 4m5.106529752s to wait for apiserver process to appear ...
	I0221 08:59:19.129100  223679 api_server.go:87] waiting for apiserver healthz status ...
	I0221 08:59:19.129165  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0221 08:59:19.224393  223679 logs.go:274] 1 containers: [5b808a7ef4a2]
	I0221 08:59:19.224460  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0221 08:59:19.319828  223679 logs.go:274] 1 containers: [96cc9489b33e]
	I0221 08:59:19.319900  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0221 08:59:19.418463  223679 logs.go:274] 0 containers: []
	W0221 08:59:19.418495  223679 logs.go:276] No container was found matching "coredns"
	I0221 08:59:19.418541  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0221 08:59:19.516431  223679 logs.go:274] 1 containers: [f012d1d45e22]
	I0221 08:59:19.516522  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0221 08:59:19.607457  223679 logs.go:274] 1 containers: [449cc37a92fe]
	I0221 08:59:19.607543  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0221 08:59:19.644308  223679 logs.go:274] 0 containers: []
	W0221 08:59:19.644330  223679 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0221 08:59:19.644368  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0221 08:59:19.677987  223679 logs.go:274] 1 containers: [528acfa448ce]
	I0221 08:59:19.678065  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0221 08:59:19.711573  223679 logs.go:274] 1 containers: [cddc9ef001f2]
	I0221 08:59:19.711614  223679 logs.go:123] Gathering logs for dmesg ...
	I0221 08:59:19.711634  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0221 08:59:19.739316  223679 logs.go:123] Gathering logs for describe nodes ...
	I0221 08:59:19.739352  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0221 08:59:19.829642  223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ...
	I0221 08:59:19.829686  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e"
	I0221 08:59:19.928327  223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ...
	I0221 08:59:19.928367  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22"
	I0221 08:59:20.030039  223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ...
	I0221 08:59:20.030084  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce"
	I0221 08:59:20.115493  223679 logs.go:123] Gathering logs for kubelet ...
	I0221 08:59:20.115539  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0221 08:59:20.289828  223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ...
	I0221 08:59:20.289874  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe"
	I0221 08:59:20.351337  223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ...
	I0221 08:59:20.351388  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2"
	I0221 08:59:20.480018  223679 logs.go:123] Gathering logs for Docker ...
	I0221 08:59:20.480056  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0221 08:59:20.594320  223679 logs.go:123] Gathering logs for container status ...
	I0221 08:59:20.594358  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0221 08:59:20.641023  223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ...
	I0221 08:59:20.641062  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2"
	I0221 08:59:23.238237  223679 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0221 08:59:23.244347  223679 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0221 08:59:23.246494  223679 api_server.go:140] control plane version: v1.23.4
	I0221 08:59:23.246519  223679 api_server.go:130] duration metric: took 4.1174116s to wait for apiserver health ...
	I0221 08:59:23.246529  223679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0221 08:59:23.246581  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0221 08:59:23.331088  223679 logs.go:274] 1 containers: [5b808a7ef4a2]
	I0221 08:59:23.331164  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0221 08:59:23.425220  223679 logs.go:274] 1 containers: [96cc9489b33e]
	I0221 08:59:23.425297  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0221 08:59:23.510198  223679 logs.go:274] 0 containers: []
	W0221 08:59:23.510230  223679 logs.go:276] No container was found matching "coredns"
	I0221 08:59:23.510284  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0221 08:59:23.548794  223679 logs.go:274] 1 containers: [f012d1d45e22]
	I0221 08:59:23.548859  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0221 08:59:23.642803  223679 logs.go:274] 1 containers: [449cc37a92fe]
	I0221 08:59:23.642891  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0221 08:59:23.735232  223679 logs.go:274] 0 containers: []
	W0221 08:59:23.735263  223679 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0221 08:59:23.735316  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0221 08:59:23.820175  223679 logs.go:274] 1 containers: [528acfa448ce]
	I0221 08:59:23.820245  223679 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0221 08:59:23.911162  223679 logs.go:274] 1 containers: [cddc9ef001f2]
	I0221 08:59:23.911205  223679 logs.go:123] Gathering logs for storage-provisioner [528acfa448ce] ...
	I0221 08:59:23.911218  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 528acfa448ce"
	I0221 08:59:24.010277  223679 logs.go:123] Gathering logs for kubelet ...
	I0221 08:59:24.010307  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0221 08:59:24.188331  223679 logs.go:123] Gathering logs for dmesg ...
	I0221 08:59:24.188378  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0221 08:59:24.235517  223679 logs.go:123] Gathering logs for describe nodes ...
	I0221 08:59:24.235564  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0221 08:59:24.433778  223679 logs.go:123] Gathering logs for kube-scheduler [f012d1d45e22] ...
	I0221 08:59:24.433815  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f012d1d45e22"
	I0221 08:59:24.542462  223679 logs.go:123] Gathering logs for Docker ...
	I0221 08:59:24.542562  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0221 08:59:24.683898  223679 logs.go:123] Gathering logs for container status ...
	I0221 08:59:24.683938  223679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0221 08:59:24.747804  223679 logs.go:123] Gathering logs for kube-apiserver [5b808a7ef4a2] ...
	I0221 08:59:24.747846  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b808a7ef4a2"
	I0221 08:59:24.839623  223679 logs.go:123] Gathering logs for etcd [96cc9489b33e] ...
	I0221 08:59:24.839664  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc9489b33e"
	I0221 08:59:24.933214  223679 logs.go:123] Gathering logs for kube-proxy [449cc37a92fe] ...
	I0221 08:59:24.933249  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 449cc37a92fe"
	I0221 08:59:24.970081  223679 logs.go:123] Gathering logs for kube-controller-manager [cddc9ef001f2] ...
	I0221 08:59:24.970115  223679 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cddc9ef001f2"
	I0221 08:59:27.559651  223679 system_pods.go:59] 9 kube-system pods found
	I0221 08:59:27.559689  223679 system_pods.go:61] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:27.559697  223679 system_pods.go:61] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:27.559703  223679 system_pods.go:61] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:27.559708  223679 system_pods.go:61] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:27.559713  223679 system_pods.go:61] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:27.559717  223679 system_pods.go:61] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:27.559722  223679 system_pods.go:61] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:27.559726  223679 system_pods.go:61] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:27.559734  223679 system_pods.go:61] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:27.559742  223679 system_pods.go:74] duration metric: took 4.313209437s to wait for pod list to return data ...
	I0221 08:59:27.559749  223679 default_sa.go:34] waiting for default service account to be created ...
	I0221 08:59:27.562671  223679 default_sa.go:45] found service account: "default"
	I0221 08:59:27.562697  223679 default_sa.go:55] duration metric: took 2.939018ms for default service account to be created ...
	I0221 08:59:27.562709  223679 system_pods.go:116] waiting for k8s-apps to be running ...
	I0221 08:59:27.606750  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:27.606791  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:27.606820  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:27.606832  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:27.606849  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:27.606856  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:27.606863  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:27.606870  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:27.606880  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:27.606889  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:27.606913  223679 retry.go:31] will retry after 263.082536ms: missing components: kube-dns
	I0221 08:59:27.875522  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:27.875558  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:27.875569  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:27.875575  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:27.875581  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:27.875586  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:27.875590  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:27.875593  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:27.875598  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:27.875603  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:27.875619  223679 retry.go:31] will retry after 381.329545ms: missing components: kube-dns
	I0221 08:59:28.262703  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:28.262737  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:28.262745  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:28.262752  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:28.262757  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:28.262764  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:28.262770  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:28.262776  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:28.262782  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:28.262789  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:28.262812  223679 retry.go:31] will retry after 422.765636ms: missing components: kube-dns
	I0221 08:59:28.708387  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:28.708425  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:28.708467  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:28.708488  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:28.708506  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:28.708519  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:28.708531  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:28.708537  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:28.708544  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:28.708559  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:28.708575  223679 retry.go:31] will retry after 473.074753ms: missing components: kube-dns
	I0221 08:59:29.187326  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:29.187359  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:29.187367  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:29.187374  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:29.187379  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:29.187384  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:29.187388  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:29.187392  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:29.187396  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:29.187401  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:29.187414  223679 retry.go:31] will retry after 587.352751ms: missing components: kube-dns
	I0221 08:59:29.807999  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:29.808041  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:29.808052  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:29.808062  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:29.808069  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:29.808077  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:29.808087  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:29.808093  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:29.808103  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:29.808113  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:29.808133  223679 retry.go:31] will retry after 834.206799ms: missing components: kube-dns
	I0221 08:59:30.649684  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:30.649731  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:30.649746  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:30.649756  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:30.649766  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:30.649778  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:30.649792  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:30.649806  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:30.649817  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:30.649831  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:30.649852  223679 retry.go:31] will retry after 746.553905ms: missing components: kube-dns
	I0221 08:59:31.403363  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:31.403414  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:31.403426  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:31.403438  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:31.403446  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:31.403455  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:31.403466  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:31.403474  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:31.403488  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:31.403498  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:31.403522  223679 retry.go:31] will retry after 987.362415ms: missing components: kube-dns
	I0221 08:59:32.397015  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:32.397055  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:32.397064  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:32.397075  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:32.397083  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:32.397090  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:32.397103  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:32.397110  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:32.397121  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:32.397132  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:32.397148  223679 retry.go:31] will retry after 1.189835008s: missing components: kube-dns
	I0221 08:59:33.607429  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:33.607467  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:33.607475  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:33.607484  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:33.607493  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:33.607500  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:33.607507  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:33.607531  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:33.607541  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:33.607550  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:33.607570  223679 retry.go:31] will retry after 1.677229867s: missing components: kube-dns
	I0221 08:59:35.291721  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:35.291757  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:35.291767  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:35.291776  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:35.291783  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:35.291792  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:35.291798  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:35.291809  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:35.291815  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:35.291826  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:35.291840  223679 retry.go:31] will retry after 2.346016261s: missing components: kube-dns
	I0221 08:59:37.644075  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:37.644109  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:37.644117  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:37.644124  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:37.644131  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:37.644136  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:37.644140  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:37.644144  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:37.644147  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:37.644153  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:37.644169  223679 retry.go:31] will retry after 3.36678925s: missing components: kube-dns
	I0221 08:59:41.020218  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:41.020262  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:41.020274  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:41.020284  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:41.020290  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:41.020296  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:41.020301  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:41.020307  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:41.020324  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:41.020332  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:41.020346  223679 retry.go:31] will retry after 3.11822781s: missing components: kube-dns
	I0221 08:59:44.146493  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:44.146526  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:44.146534  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:44.146544  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:44.146552  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:44.146563  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:44.146570  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:44.146582  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:44.146593  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:44.146603  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:44.146623  223679 retry.go:31] will retry after 4.276119362s: missing components: kube-dns
	I0221 08:59:48.430784  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:48.430822  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:48.430855  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:48.430867  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:48.430880  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:48.430889  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:48.430901  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:48.430911  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:48.430921  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:48.430931  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:48.431005  223679 retry.go:31] will retry after 5.167232101s: missing components: kube-dns
	I0221 08:59:53.607863  223679 system_pods.go:86] 9 kube-system pods found
	I0221 08:59:53.607910  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 08:59:53.607925  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 08:59:53.607936  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 08:59:53.607950  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 08:59:53.607957  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 08:59:53.607965  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 08:59:53.607971  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 08:59:53.607979  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 08:59:53.607991  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 08:59:53.608009  223679 retry.go:31] will retry after 6.994901864s: missing components: kube-dns
	I0221 09:00:00.608725  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:00:00.608757  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:00:00.608767  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:00:00.608774  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:00:00.608778  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:00:00.608783  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:00:00.608788  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:00:00.608791  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:00:00.608796  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:00:00.608801  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 09:00:00.608818  223679 retry.go:31] will retry after 7.91826225s: missing components: kube-dns
	I0221 09:00:08.534545  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:00:08.534589  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:00:08.534602  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:00:08.534613  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:00:08.534621  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:00:08.534630  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:00:08.534642  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:00:08.534654  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:00:08.534665  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:00:08.534678  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 09:00:08.534700  223679 retry.go:31] will retry after 9.953714808s: missing components: kube-dns
	I0221 09:00:18.494832  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:00:18.494873  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:00:18.494884  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:00:18.494893  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:00:18.494898  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:00:18.494903  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:00:18.494909  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:00:18.494918  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:00:18.494925  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:00:18.494935  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 09:00:18.494956  223679 retry.go:31] will retry after 15.120437328s: missing components: kube-dns
	I0221 09:00:33.622907  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:00:33.622950  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:00:33.622961  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:00:33.622970  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:00:33.622977  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:00:33.622983  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:00:33.622989  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:00:33.623036  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:00:33.623050  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:00:33.623058  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 09:00:33.623079  223679 retry.go:31] will retry after 14.90607158s: missing components: kube-dns
	I0221 09:00:48.536869  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:00:48.536919  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:00:48.536931  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:00:48.536941  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:00:48.536949  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:00:48.536955  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:00:48.536959  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:00:48.536964  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:00:48.536968  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:00:48.536982  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running
	I0221 09:00:48.536998  223679 retry.go:31] will retry after 18.465989061s: missing components: kube-dns
	I0221 09:01:07.010825  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:01:07.010865  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:01:07.010877  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:01:07.010887  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:01:07.010895  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:01:07.010902  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:01:07.010908  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:01:07.010925  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:01:07.010931  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:01:07.010939  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running
	I0221 09:01:07.010960  223679 retry.go:31] will retry after 25.219510332s: missing components: kube-dns
	I0221 09:01:32.236004  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:01:32.236044  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:01:32.236056  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:01:32.236064  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:01:32.236072  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:01:32.236078  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:01:32.236084  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:01:32.236091  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:01:32.236097  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:01:32.236107  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 09:01:32.236125  223679 retry.go:31] will retry after 35.078569648s: missing components: kube-dns
	I0221 09:02:07.320903  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:02:07.320944  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:02:07.320955  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:02:07.320961  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:02:07.320967  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:02:07.320973  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:02:07.320977  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:02:07.320981  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:02:07.320985  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:02:07.320990  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 09:02:07.321002  223679 retry.go:31] will retry after 50.027701973s: missing components: kube-dns
	I0221 09:02:57.356331  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:02:57.356379  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:02:57.356394  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:02:57.356411  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:02:57.356420  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:02:57.356428  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:02:57.356435  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:02:57.356448  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:02:57.356454  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:02:57.356467  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 09:02:57.356486  223679 retry.go:31] will retry after 47.463338706s: missing components: kube-dns
	I0221 09:03:44.827562  223679 system_pods.go:86] 9 kube-system pods found
	I0221 09:03:44.827595  223679 system_pods.go:89] "calico-kube-controllers-8594699699-ftdtm" [198a6a8f-4d1b-44fc-9a43-3166e582db73] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0221 09:03:44.827608  223679 system_pods.go:89] "calico-node-zcdj6" [1cde82d1-663e-4fae-ac8f-2d553d35a9ef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0221 09:03:44.827618  223679 system_pods.go:89] "coredns-64897985d-r75jc" [8b61f5f5-e695-42e1-8247-797a3d90eef7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0221 09:03:44.827630  223679 system_pods.go:89] "etcd-calico-20220221084934-6550" [64cc8094-de65-404a-9795-139c009db828] Running
	I0221 09:03:44.827637  223679 system_pods.go:89] "kube-apiserver-calico-20220221084934-6550" [1e41ec6f-83b2-41a2-abca-b142bbafbd99] Running
	I0221 09:03:44.827644  223679 system_pods.go:89] "kube-controller-manager-calico-20220221084934-6550" [bf127e58-28e4-4185-ac33-70522c41358e] Running
	I0221 09:03:44.827654  223679 system_pods.go:89] "kube-proxy-kwcvx" [8fc64598-ad3e-4332-b5ef-5024f95208ce] Running
	I0221 09:03:44.827659  223679 system_pods.go:89] "kube-scheduler-calico-20220221084934-6550" [6b4f11da-44d9-4e35-ad5b-891443e341dc] Running
	I0221 09:03:44.827674  223679 system_pods.go:89] "storage-provisioner" [35ba5260-bcd0-4f40-9953-47d2c167c12c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0221 09:03:44.830160  223679 out.go:176] 
	W0221 09:03:44.830324  223679 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0221 09:03:44.830341  223679 out.go:241] * 
	W0221 09:03:44.831471  223679 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0221 09:03:44.832903  223679 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (553.27s)

TestNetworkPlugins/group/custom-weave/Start (519.15s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p custom-weave-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: exit status 105 (8m39.119069821s)

-- stdout --
	* [custom-weave-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node custom-weave-20220221084934-6550 in cluster custom-weave-20220221084934-6550
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0221 08:54:47.458219  227869 out.go:297] Setting OutFile to fd 1 ...
	I0221 08:54:47.458326  227869 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:54:47.458338  227869 out.go:310] Setting ErrFile to fd 2...
	I0221 08:54:47.458344  227869 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:54:47.458503  227869 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
	I0221 08:54:47.458917  227869 out.go:304] Setting JSON to false
	I0221 08:54:47.461070  227869 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2242,"bootTime":1645431446,"procs":806,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0221 08:54:47.461183  227869 start.go:122] virtualization: kvm guest
	I0221 08:54:47.464031  227869 out.go:176] * [custom-weave-20220221084934-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0221 08:54:47.464153  227869 notify.go:193] Checking for updates...
	I0221 08:54:47.465465  227869 out.go:176]   - MINIKUBE_LOCATION=13641
	I0221 08:54:47.466737  227869 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0221 08:54:47.468108  227869 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	I0221 08:54:47.469317  227869 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	I0221 08:54:47.471589  227869 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0221 08:54:47.472040  227869 config.go:176] Loaded profile config "auto-20220221084933-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:54:47.472126  227869 config.go:176] Loaded profile config "calico-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:54:47.472199  227869 config.go:176] Loaded profile config "cilium-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:54:47.472247  227869 driver.go:344] Setting default libvirt URI to qemu:///system
	I0221 08:54:47.517461  227869 docker.go:132] docker version: linux-20.10.12
	I0221 08:54:47.517586  227869 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:54:47.620138  227869 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:47.551657257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:54:47.620271  227869 docker.go:237] overlay module found
	I0221 08:54:47.622372  227869 out.go:176] * Using the docker driver based on user configuration
	I0221 08:54:47.622397  227869 start.go:281] selected driver: docker
	I0221 08:54:47.622412  227869 start.go:798] validating driver "docker" against <nil>
	I0221 08:54:47.622433  227869 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0221 08:54:47.622515  227869 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0221 08:54:47.622540  227869 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0221 08:54:47.623978  227869 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0221 08:54:47.624791  227869 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:54:47.725034  227869 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:54:47.66170668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:54:47.725164  227869 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0221 08:54:47.725316  227869 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0221 08:54:47.725345  227869 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0221 08:54:47.725369  227869 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0221 08:54:47.725389  227869 start_flags.go:297] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0221 08:54:47.725399  227869 start_flags.go:302] config:
	{Name:custom-weave-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:54:47.727724  227869 out.go:176] * Starting control plane node custom-weave-20220221084934-6550 in cluster custom-weave-20220221084934-6550
	I0221 08:54:47.727767  227869 cache.go:120] Beginning downloading kic base image for docker with docker
	I0221 08:54:47.729212  227869 out.go:176] * Pulling base image ...
	I0221 08:54:47.729243  227869 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
	I0221 08:54:47.729280  227869 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4
	I0221 08:54:47.729295  227869 cache.go:57] Caching tarball of preloaded images
	I0221 08:54:47.729343  227869 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
	I0221 08:54:47.729540  227869 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0221 08:54:47.729557  227869 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.4 on docker
	I0221 08:54:47.729678  227869 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/config.json ...
	I0221 08:54:47.729700  227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/config.json: {Name:mka893c0a5ff8738d3209de71a273b5ed5f8c7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:47.776587  227869 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
	I0221 08:54:47.776615  227869 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
	I0221 08:54:47.776635  227869 cache.go:208] Successfully downloaded all kic artifacts
	I0221 08:54:47.776674  227869 start.go:313] acquiring machines lock for custom-weave-20220221084934-6550: {Name:mk4ea336349dcf18d26ade5ee9a9024978187ca3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:54:47.776813  227869 start.go:317] acquired machines lock for "custom-weave-20220221084934-6550" in 118.503µs
	I0221 08:54:47.776843  227869 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0221 08:54:47.776919  227869 start.go:126] createHost starting for "" (driver="docker")
	I0221 08:54:47.779541  227869 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0221 08:54:47.779787  227869 start.go:160] libmachine.API.Create for "custom-weave-20220221084934-6550" (driver="docker")
	I0221 08:54:47.779820  227869 client.go:168] LocalClient.Create starting
	I0221 08:54:47.779884  227869 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem
	I0221 08:54:47.779933  227869 main.go:130] libmachine: Decoding PEM data...
	I0221 08:54:47.779958  227869 main.go:130] libmachine: Parsing certificate...
	I0221 08:54:47.780028  227869 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem
	I0221 08:54:47.780052  227869 main.go:130] libmachine: Decoding PEM data...
	I0221 08:54:47.780078  227869 main.go:130] libmachine: Parsing certificate...
	I0221 08:54:47.780404  227869 cli_runner.go:133] Run: docker network inspect custom-weave-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0221 08:54:47.812283  227869 cli_runner.go:180] docker network inspect custom-weave-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0221 08:54:47.812354  227869 network_create.go:254] running [docker network inspect custom-weave-20220221084934-6550] to gather additional debugging logs...
	I0221 08:54:47.812371  227869 cli_runner.go:133] Run: docker network inspect custom-weave-20220221084934-6550
	W0221 08:54:47.846261  227869 cli_runner.go:180] docker network inspect custom-weave-20220221084934-6550 returned with exit code 1
	I0221 08:54:47.846317  227869 network_create.go:257] error running [docker network inspect custom-weave-20220221084934-6550]: docker network inspect custom-weave-20220221084934-6550: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220221084934-6550
	I0221 08:54:47.846350  227869 network_create.go:259] output of [docker network inspect custom-weave-20220221084934-6550]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220221084934-6550
	
	** /stderr **
	I0221 08:54:47.846437  227869 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0221 08:54:47.880149  227869 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-8af72e223855 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:a5:dd:c8}}
	I0221 08:54:47.880989  227869 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006d4200] misses:0}
	I0221 08:54:47.881044  227869 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0221 08:54:47.881059  227869 network_create.go:106] attempt to create docker network custom-weave-20220221084934-6550 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0221 08:54:47.881116  227869 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220221084934-6550
	I0221 08:54:47.951115  227869 network_create.go:90] docker network custom-weave-20220221084934-6550 192.168.58.0/24 created
	I0221 08:54:47.951148  227869 kic.go:106] calculated static IP "192.168.58.2" for the "custom-weave-20220221084934-6550" container
	I0221 08:54:47.951220  227869 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0221 08:54:47.991401  227869 cli_runner.go:133] Run: docker volume create custom-weave-20220221084934-6550 --label name.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true
	I0221 08:54:48.025554  227869 oci.go:102] Successfully created a docker volume custom-weave-20220221084934-6550
	I0221 08:54:48.025643  227869 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220221084934-6550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --entrypoint /usr/bin/test -v custom-weave-20220221084934-6550:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib
	I0221 08:54:48.595681  227869 oci.go:106] Successfully prepared a docker volume custom-weave-20220221084934-6550
	I0221 08:54:48.595760  227869 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
	I0221 08:54:48.595785  227869 kic.go:179] Starting extracting preloaded images to volume ...
	I0221 08:54:48.595864  227869 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir
	I0221 08:54:54.606684  227869 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220221084934-6550:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (6.010752765s)
	I0221 08:54:54.606731  227869 kic.go:188] duration metric: took 6.010943 seconds to extract preloaded images to volume
	W0221 08:54:54.606773  227869 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0221 08:54:54.606787  227869 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0221 08:54:54.606827  227869 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0221 08:54:54.713053  227869 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220221084934-6550 --name custom-weave-20220221084934-6550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220221084934-6550 --network custom-weave-20220221084934-6550 --ip 192.168.58.2 --volume custom-weave-20220221084934-6550:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2
	I0221 08:54:55.197249  227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Running}}
	I0221 08:54:55.251551  227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
	I0221 08:54:55.285366  227869 cli_runner.go:133] Run: docker exec custom-weave-20220221084934-6550 stat /var/lib/dpkg/alternatives/iptables
	I0221 08:54:55.364656  227869 oci.go:281] the created container "custom-weave-20220221084934-6550" has a running status.
	I0221 08:54:55.364693  227869 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa...
	I0221 08:54:55.460289  227869 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0221 08:54:55.569379  227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
	I0221 08:54:55.607358  227869 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0221 08:54:55.607386  227869 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220221084934-6550 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0221 08:54:55.707944  227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
	I0221 08:54:55.746584  227869 machine.go:88] provisioning docker machine ...
	I0221 08:54:55.746625  227869 ubuntu.go:169] provisioning hostname "custom-weave-20220221084934-6550"
	I0221 08:54:55.746679  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:55.782136  227869 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:55.782378  227869 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0221 08:54:55.782408  227869 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220221084934-6550 && echo "custom-weave-20220221084934-6550" | sudo tee /etc/hostname
	I0221 08:54:55.920475  227869 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20220221084934-6550
	
	I0221 08:54:55.920553  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:55.975664  227869 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:55.975866  227869 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0221 08:54:55.975900  227869 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220221084934-6550' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220221084934-6550/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220221084934-6550' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0221 08:54:56.102934  227869 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0221 08:54:56.102974  227869 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube}
	I0221 08:54:56.103020  227869 ubuntu.go:177] setting up certificates
	I0221 08:54:56.103036  227869 provision.go:83] configureAuth start
	I0221 08:54:56.103092  227869 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220221084934-6550
	I0221 08:54:56.140749  227869 provision.go:138] copyHostCerts
	I0221 08:54:56.140814  227869 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem, removing ...
	I0221 08:54:56.140828  227869 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem
	I0221 08:54:56.140916  227869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.pem (1082 bytes)
	I0221 08:54:56.141002  227869 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem, removing ...
	I0221 08:54:56.141016  227869 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem
	I0221 08:54:56.141053  227869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cert.pem (1123 bytes)
	I0221 08:54:56.141122  227869 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem, removing ...
	I0221 08:54:56.141135  227869 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem
	I0221 08:54:56.141163  227869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/key.pem (1675 bytes)
	I0221 08:54:56.141225  227869 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220221084934-6550 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220221084934-6550]
	I0221 08:54:56.326607  227869 provision.go:172] copyRemoteCerts
	I0221 08:54:56.326675  227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0221 08:54:56.326718  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:56.363092  227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:56.452714  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0221 08:54:56.472983  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0221 08:54:56.494894  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0221 08:54:56.515723  227869 provision.go:86] duration metric: configureAuth took 412.669796ms
	I0221 08:54:56.515755  227869 ubuntu.go:193] setting minikube options for container-runtime
	I0221 08:54:56.515964  227869 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:54:56.516026  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:56.553857  227869 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:56.554015  227869 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0221 08:54:56.554037  227869 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0221 08:54:56.675412  227869 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0221 08:54:56.675444  227869 ubuntu.go:71] root file system type: overlay
	I0221 08:54:56.675646  227869 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0221 08:54:56.675703  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:56.714231  227869 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:56.714406  227869 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0221 08:54:56.714509  227869 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0221 08:54:56.855829  227869 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0221 08:54:56.855929  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:56.893976  227869 main.go:130] libmachine: Using SSH client type: native
	I0221 08:54:56.894175  227869 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1100] 0x7a41e0 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I0221 08:54:56.894198  227869 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0221 08:54:57.579128  227869 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-21 08:54:56.850898043 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
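	The SSH command logged at 08:54:56.894 follows a compare-then-swap pattern: stage the candidate unit as docker.service.new, diff it against the live unit, and only when they differ move the new file into place, daemon-reload, and restart docker. A minimal Python sketch of the same idempotent-update idea (`update_unit_if_changed` is an illustrative helper name; the real flow shells out to diff, mv, and systemctl over SSH):

	```python
	import filecmp
	import shutil
	from pathlib import Path

	def update_unit_if_changed(current: Path, candidate: Path) -> bool:
	    """Replace `current` with `candidate` only when their contents differ.

	    Returns True when a swap happened (the caller would then daemon-reload
	    and restart the service), False when the live unit already matches and
	    no restart is needed.
	    """
	    if current.exists() and filecmp.cmp(current, candidate, shallow=False):
	        candidate.unlink()  # staged copy is identical; discard it
	        return False
	    shutil.move(str(candidate), str(current))
	    return True
	```

	The payoff of the pattern is visible later in the log: on an unchanged rerun the diff is empty, the `|| { ... }` branch is skipped, and docker is never restarted.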
	I0221 08:54:57.579162  227869 machine.go:91] provisioned docker machine in 1.832554133s
	I0221 08:54:57.579173  227869 client.go:171] LocalClient.Create took 9.799347142s
	I0221 08:54:57.579189  227869 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20220221084934-6550" took 9.79940181s
	I0221 08:54:57.579201  227869 start.go:267] post-start starting for "custom-weave-20220221084934-6550" (driver="docker")
	I0221 08:54:57.579207  227869 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0221 08:54:57.579305  227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0221 08:54:57.579351  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:57.613063  227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:57.703066  227869 ssh_runner.go:195] Run: cat /etc/os-release
	I0221 08:54:57.705959  227869 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0221 08:54:57.705980  227869 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0221 08:54:57.705991  227869 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0221 08:54:57.705996  227869 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0221 08:54:57.706004  227869 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/addons for local assets ...
	I0221 08:54:57.706050  227869 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files for local assets ...
	I0221 08:54:57.706110  227869 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem -> 65502.pem in /etc/ssl/certs
	I0221 08:54:57.706179  227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0221 08:54:57.713029  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /etc/ssl/certs/65502.pem (1708 bytes)
	I0221 08:54:57.731016  227869 start.go:270] post-start completed in 151.786403ms
	I0221 08:54:57.731352  227869 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220221084934-6550
	I0221 08:54:57.764434  227869 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/config.json ...
	I0221 08:54:57.764715  227869 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0221 08:54:57.764768  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:57.796823  227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:57.883538  227869 start.go:129] duration metric: createHost completed in 10.106607266s
	I0221 08:54:57.883571  227869 start.go:80] releasing machines lock for "custom-weave-20220221084934-6550", held for 10.106740513s
	I0221 08:54:57.883662  227869 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220221084934-6550
	I0221 08:54:57.916447  227869 ssh_runner.go:195] Run: systemctl --version
	I0221 08:54:57.916504  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:57.916539  227869 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0221 08:54:57.916595  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:54:57.952282  227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:57.953012  227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
	I0221 08:54:58.182655  227869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0221 08:54:58.192269  227869 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0221 08:54:58.201710  227869 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0221 08:54:58.201772  227869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0221 08:54:58.217490  227869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0221 08:54:58.236241  227869 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0221 08:54:58.328534  227869 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0221 08:54:58.405690  227869 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0221 08:54:58.418618  227869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0221 08:54:58.507435  227869 ssh_runner.go:195] Run: sudo systemctl start docker
	I0221 08:54:58.517435  227869 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0221 08:54:58.555565  227869 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0221 08:54:58.596881  227869 out.go:203] * Preparing Kubernetes v1.23.4 on Docker 20.10.12 ...
	I0221 08:54:58.596957  227869 cli_runner.go:133] Run: docker network inspect custom-weave-20220221084934-6550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0221 08:54:58.628733  227869 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0221 08:54:58.632087  227869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
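	The /etc/hosts edit above is made idempotent by filtering out any existing `host.minikube.internal` line before appending the fresh mapping, so repeated runs never accumulate duplicate entries. The same filter-then-append logic sketched in Python (`upsert_host` is an illustrative name, not minikube code):

	```python
	def upsert_host(lines, ip, hostname):
	    """Drop any line already mapping `hostname`, then append `ip<TAB>hostname`."""
	    suffix = "\t" + hostname
	    kept = [ln for ln in lines if not ln.endswith(suffix)]
	    kept.append(ip + suffix)
	    return kept
	```

	Running it twice with the same arguments yields the same list, mirroring why the shell one-liner is safe to re-run on every start.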
	I0221 08:54:58.643526  227869 out.go:176]   - kubelet.housekeeping-interval=5m
	I0221 08:54:58.643605  227869 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
	I0221 08:54:58.643653  227869 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0221 08:54:58.675389  227869 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.4
	k8s.gcr.io/kube-proxy:v1.23.4
	k8s.gcr.io/kube-scheduler:v1.23.4
	k8s.gcr.io/kube-controller-manager:v1.23.4
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0221 08:54:58.675418  227869 docker.go:537] Images already preloaded, skipping extraction
	I0221 08:54:58.675488  227869 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0221 08:54:58.708483  227869 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.4
	k8s.gcr.io/kube-proxy:v1.23.4
	k8s.gcr.io/kube-scheduler:v1.23.4
	k8s.gcr.io/kube-controller-manager:v1.23.4
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0221 08:54:58.708509  227869 cache_images.go:84] Images are preloaded, skipping loading
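	The "Images are preloaded, skipping loading" decision above boils down to listing `docker images --format {{.Repository}}:{{.Tag}}` and checking that every image required for the target Kubernetes version is already present. A rough Python rendering of that subset check (image names taken from the stdout above; the helper name is illustrative):

	```python
	def images_preloaded(docker_images_output, required):
	    """True when every required repo:tag appears in `docker images` output."""
	    present = {ln.strip() for ln in docker_images_output.splitlines() if ln.strip()}
	    return required <= present
	```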
	I0221 08:54:58.708561  227869 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0221 08:54:58.791115  227869 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0221 08:54:58.791158  227869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0221 08:54:58.791174  227869 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220221084934-6550 NodeName:custom-weave-20220221084934-6550 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0221 08:54:58.791341  227869 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20220221084934-6550"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
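	The kubeadm config dumped above is rendered from the option set recorded at kubeadm.go:158 (advertise address, API server port, pod subnet, cluster name, and so on). A toy illustration of that substitution step using Python's string.Template (the field names mirror the log; the template fragment is abbreviated and is not minikube's real template):

	```python
	from string import Template

	KUBEADM_FRAGMENT = Template("""\
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: $advertise_address
	  bindPort: $api_server_port
	""")

	def render_init_config(advertise_address, api_server_port):
	    """Fill the fragment with values like those logged by kubeadm.go:158."""
	    return KUBEADM_FRAGMENT.substitute(
	        advertise_address=advertise_address,
	        api_server_port=api_server_port,
	    )
	```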
	I0221 08:54:58.791445  227869 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220221084934-6550 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
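	The `--housekeeping-interval=5m` flag in the kubelet ExecStart line above comes from the ExtraOptions entry `{Component:kubelet Key:housekeeping-interval Value:5m}` in the config just logged. A small sketch of turning such entries into CLI flags (hypothetical helper, assuming one `--key=value` flag per kubelet-scoped option):

	```python
	def extra_option_flags(options, component="kubelet"):
	    """Render {Component, Key, Value} entries into sorted --key=value flags."""
	    return sorted(
	        "--{}={}".format(opt["Key"], opt["Value"])
	        for opt in options
	        if opt["Component"] == component
	    )
	```

	Options scoped to other components (apiserver, controller-manager, scheduler) are filtered out here, just as the logged kubelet unit carries only the kubelet-scoped flag.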
	I0221 08:54:58.791498  227869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.4
	I0221 08:54:58.798800  227869 binaries.go:44] Found k8s binaries, skipping transfer
	I0221 08:54:58.799251  227869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0221 08:54:58.807147  227869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (406 bytes)
	I0221 08:54:58.820224  227869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0221 08:54:58.833088  227869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0221 08:54:58.846338  227869 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0221 08:54:58.849240  227869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0221 08:54:58.858694  227869 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550 for IP: 192.168.58.2
	I0221 08:54:58.858805  227869 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key
	I0221 08:54:58.858840  227869 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key
	I0221 08:54:58.858885  227869 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.key
	I0221 08:54:58.858898  227869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.crt with IP's: []
	I0221 08:54:59.108630  227869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.crt ...
	I0221 08:54:59.108671  227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.crt: {Name:mk10a31cfb47f6cf3f7da307f7bac4d74ffcf445 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:59.108910  227869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.key ...
	I0221 08:54:59.108933  227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/client.key: {Name:mke61651e1bae31960788075de046902ba3a384d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:59.109066  227869 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041
	I0221 08:54:59.109088  227869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0221 08:54:59.505500  227869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041 ...
	I0221 08:54:59.505538  227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041: {Name:mkbc006409aa5d703ce8a53644ff64d9eca16a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:59.505785  227869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041 ...
	I0221 08:54:59.505805  227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041: {Name:mkad1017a3ef8cd68460d4665ab5aa6e577c7d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:59.505895  227869 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt
	I0221 08:54:59.505949  227869 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key
	I0221 08:54:59.506011  227869 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key
	I0221 08:54:59.506028  227869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt with IP's: []
	I0221 08:54:59.595538  227869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt ...
	I0221 08:54:59.595578  227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt: {Name:mk42c1b2b0663ef91b5f6118e4e09fad281d7665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:59.595806  227869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key ...
	I0221 08:54:59.595823  227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key: {Name:mk2f72a2c489551e30437a2aea9d0cb930af0fc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:54:59.595993  227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem (1338 bytes)
	W0221 08:54:59.596029  227869 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550_empty.pem, impossibly tiny 0 bytes
	I0221 08:54:59.596043  227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca-key.pem (1679 bytes)
	I0221 08:54:59.596096  227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/ca.pem (1082 bytes)
	I0221 08:54:59.596127  227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/cert.pem (1123 bytes)
	I0221 08:54:59.596151  227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/key.pem (1675 bytes)
	I0221 08:54:59.596191  227869 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem (1708 bytes)
	I0221 08:54:59.597036  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0221 08:54:59.616277  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0221 08:54:59.637516  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0221 08:54:59.655614  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/custom-weave-20220221084934-6550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0221 08:54:59.673516  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0221 08:54:59.691562  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0221 08:54:59.709384  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0221 08:54:59.731673  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0221 08:54:59.749383  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0221 08:54:59.768558  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/certs/6550.pem --> /usr/share/ca-certificates/6550.pem (1338 bytes)
	I0221 08:54:59.785931  227869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/ssl/certs/65502.pem --> /usr/share/ca-certificates/65502.pem (1708 bytes)
	I0221 08:54:59.803428  227869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0221 08:54:59.816515  227869 ssh_runner.go:195] Run: openssl version
	I0221 08:54:59.821519  227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6550.pem && ln -fs /usr/share/ca-certificates/6550.pem /etc/ssl/certs/6550.pem"
	I0221 08:54:59.829127  227869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6550.pem
	I0221 08:54:59.832411  227869 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 21 08:30 /usr/share/ca-certificates/6550.pem
	I0221 08:54:59.832456  227869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6550.pem
	I0221 08:54:59.837155  227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6550.pem /etc/ssl/certs/51391683.0"
	I0221 08:54:59.844619  227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65502.pem && ln -fs /usr/share/ca-certificates/65502.pem /etc/ssl/certs/65502.pem"
	I0221 08:54:59.852034  227869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65502.pem
	I0221 08:54:59.855268  227869 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 21 08:30 /usr/share/ca-certificates/65502.pem
	I0221 08:54:59.855304  227869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65502.pem
	I0221 08:54:59.860269  227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65502.pem /etc/ssl/certs/3ec20f2e.0"
	I0221 08:54:59.867781  227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0221 08:54:59.875277  227869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0221 08:54:59.878320  227869 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 21 08:26 /usr/share/ca-certificates/minikubeCA.pem
	I0221 08:54:59.878371  227869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0221 08:54:59.883480  227869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0221 08:54:59.891452  227869 kubeadm.go:391] StartCluster: {Name:custom-weave-20220221084934-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:custom-weave-20220221084934-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:54:59.891586  227869 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0221 08:54:59.924799  227869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0221 08:54:59.932091  227869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0221 08:54:59.939371  227869 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0221 08:54:59.939430  227869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0221 08:54:59.947372  227869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0221 08:54:59.947423  227869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0221 08:55:00.482705  227869 out.go:203]   - Generating certificates and keys ...
	I0221 08:55:03.685435  227869 out.go:203]   - Booting up control plane ...
	I0221 08:55:10.727547  227869 out.go:203]   - Configuring RBAC rules ...
	I0221 08:55:11.151901  227869 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0221 08:55:11.154044  227869 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0221 08:55:11.154111  227869 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.4/kubectl ...
	I0221 08:55:11.154161  227869 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0221 08:55:11.207872  227869 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0221 08:55:11.207908  227869 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0221 08:55:11.231141  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0221 08:55:12.304984  227869 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.073803299s)
	I0221 08:55:12.305050  227869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0221 08:55:12.305176  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:12.305176  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=9a4c39b14752448ee8b5da7f8bb397c2a16d9ea9 minikube.k8s.io/name=custom-weave-20220221084934-6550 minikube.k8s.io/updated_at=2022_02_21T08_55_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:12.403260  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:12.403289  227869 ops.go:34] apiserver oom_adj: -16
	I0221 08:55:12.963301  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:13.462762  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:13.963185  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:14.463531  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:14.962764  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:15.463397  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:15.963546  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:16.462752  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:16.963400  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:17.463637  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:17.963168  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:18.463128  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:18.962774  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:19.463663  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:19.962811  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:20.463551  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:20.963554  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:21.463298  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:21.963457  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:22.463549  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:22.963434  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:23.463347  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:23.962843  227869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0221 08:55:24.019474  227869 kubeadm.go:1020] duration metric: took 11.714385799s to wait for elevateKubeSystemPrivileges.
	I0221 08:55:24.019508  227869 kubeadm.go:393] StartCluster complete in 24.128063045s
	I0221 08:55:24.019531  227869 settings.go:142] acquiring lock: {Name:mk4400923ef35d7d80e21aa000bc7683aef0fb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:55:24.019619  227869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	I0221 08:55:24.020875  227869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig: {Name:mkbaf7b3f4ffbe8a9d57707d423380523fa909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0221 08:55:24.035745  227869 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0221 08:55:25.038511  227869 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220221084934-6550" rescaled to 1
	I0221 08:55:25.038569  227869 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0221 08:55:25.041496  227869 out.go:176] * Verifying Kubernetes components...
	I0221 08:55:25.038653  227869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0221 08:55:25.041566  227869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0221 08:55:25.038656  227869 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0221 08:55:25.041635  227869 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220221084934-6550"
	I0221 08:55:25.039253  227869 config.go:176] Loaded profile config "custom-weave-20220221084934-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:55:25.041657  227869 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220221084934-6550"
	W0221 08:55:25.041668  227869 addons.go:165] addon storage-provisioner should already be in state true
	I0221 08:55:25.041708  227869 host.go:66] Checking if "custom-weave-20220221084934-6550" exists ...
	I0221 08:55:25.041706  227869 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220221084934-6550"
	I0221 08:55:25.041747  227869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220221084934-6550"
	I0221 08:55:25.042057  227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
	I0221 08:55:25.042294  227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
	I0221 08:55:25.057925  227869 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220221084934-6550" to be "Ready" ...
	I0221 08:55:25.062489  227869 node_ready.go:49] node "custom-weave-20220221084934-6550" has status "Ready":"True"
	I0221 08:55:25.062517  227869 node_ready.go:38] duration metric: took 4.554004ms waiting for node "custom-weave-20220221084934-6550" to be "Ready" ...
	I0221 08:55:25.062529  227869 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0221 08:55:25.075842  227869 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-fw5hd" in "kube-system" namespace to be "Ready" ...
	I0221 08:55:25.091233  227869 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0221 08:55:25.091370  227869 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0221 08:55:25.091386  227869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0221 08:55:25.091440  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:55:25.103387  227869 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220221084934-6550"
	W0221 08:55:25.103416  227869 addons.go:165] addon default-storageclass should already be in state true
	I0221 08:55:25.103439  227869 host.go:66] Checking if "custom-weave-20220221084934-6550" exists ...
	I0221 08:55:25.103789  227869 cli_runner.go:133] Run: docker container inspect custom-weave-20220221084934-6550 --format={{.State.Status}}
	I0221 08:55:25.136464  227869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0221 08:55:25.138654  227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
	I0221 08:55:25.154985  227869 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0221 08:55:25.155049  227869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0221 08:55:25.155102  227869 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220221084934-6550
	I0221 08:55:25.188302  227869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/custom-weave-20220221084934-6550/id_rsa Username:docker}
	I0221 08:55:25.323710  227869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0221 08:55:25.509102  227869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0221 08:55:25.628703  227869 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0221 08:55:26.031236  227869 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0221 08:55:26.031270  227869 addons.go:417] enableAddons completed in 992.622832ms
	I0221 08:55:27.093638  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:29.095472  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:31.106114  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:33.593883  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:35.603309  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:38.094303  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:40.594209  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:43.094975  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:45.594422  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:48.094138  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:50.094339  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:52.593954  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:55.094041  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:57.094158  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:55:59.594464  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:01.594499  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:03.595044  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:06.096228  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:08.594008  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:10.594274  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:12.594837  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:15.094474  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:17.095174  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:19.595203  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:22.094022  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:24.094532  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:26.594351  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:29.094290  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:31.595545  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:34.094168  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:36.094581  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:38.593443  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:40.593849  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:42.594084  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:44.594768  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:47.093943  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:49.593364  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:51.593995  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:53.594291  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:55.594982  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:56:57.595281  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:00.095968  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:02.593875  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:05.095863  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:07.593598  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:09.595599  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:11.600301  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:14.093831  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:16.094542  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:18.094583  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:20.594516  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:23.094746  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:25.094898  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:27.096067  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:29.594682  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:31.595072  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:34.093783  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:36.095122  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:38.593566  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:40.593916  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:42.594575  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:44.594678  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:46.594775  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:49.093600  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:51.093716  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:53.594138  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:55.594195  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:57:58.094464  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:00.594174  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:03.094260  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:05.097983  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:07.594946  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:10.095115  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:12.593715  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:14.594295  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:17.097192  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:19.593497  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:21.593740  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:23.594026  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:26.094324  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:28.594956  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:31.094580  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:33.593910  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:35.595299  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:38.093960  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:40.094102  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:42.095073  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:44.593597  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:46.594499  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:48.594616  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:50.594840  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:53.094539  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:55.094604  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:57.593439  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:58:59.593598  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:01.594070  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:04.094375  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:06.593739  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:08.594057  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:10.594906  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:12.595167  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:15.094611  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:17.594243  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:20.094535  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:22.095445  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:24.593641  227869 pod_ready.go:102] pod "coredns-64897985d-fw5hd" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:25.099642  227869 pod_ready.go:81] duration metric: took 4m0.023714023s waiting for pod "coredns-64897985d-fw5hd" in "kube-system" namespace to be "Ready" ...
	E0221 08:59:25.099664  227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0221 08:59:25.099673  227869 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:25.101152  227869 pod_ready.go:97] error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found
	I0221 08:59:25.101173  227869 pod_ready.go:81] duration metric: took 1.494584ms waiting for pod "coredns-64897985d-kn627" in "kube-system" namespace to be "Ready" ...
	E0221 08:59:25.101182  227869 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-kn627" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kn627" not found
	I0221 08:59:25.101190  227869 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:25.105178  227869 pod_ready.go:92] pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:25.105196  227869 pod_ready.go:81] duration metric: took 3.99997ms waiting for pod "etcd-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:25.105204  227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:25.109930  227869 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:25.109949  227869 pod_ready.go:81] duration metric: took 4.739462ms waiting for pod "kube-apiserver-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:25.109958  227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:25.292675  227869 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:25.292711  227869 pod_ready.go:81] duration metric: took 182.734028ms waiting for pod "kube-controller-manager-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:25.292723  227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:25.691815  227869 pod_ready.go:92] pod "kube-proxy-q4stn" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:25.691839  227869 pod_ready.go:81] duration metric: took 399.108423ms waiting for pod "kube-proxy-q4stn" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:25.691848  227869 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:26.092539  227869 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace has status "Ready":"True"
	I0221 08:59:26.092566  227869 pod_ready.go:81] duration metric: took 400.710732ms waiting for pod "kube-scheduler-custom-weave-20220221084934-6550" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:26.092579  227869 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ...
	I0221 08:59:28.498990  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:30.998871  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:33.499218  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:35.998834  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:38.498252  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:40.499308  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:42.998921  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:45.498291  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:47.498914  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:49.998220  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:51.999087  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:53.999129  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:56.497881  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 08:59:58.498148  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:00.999242  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:03.498525  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:05.999154  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:08.498881  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:10.998464  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:12.998682  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:14.999363  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:17.498767  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:19.499481  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:21.998971  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:24.499960  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:26.999269  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:29.499198  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:31.998892  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:33.999959  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:36.498439  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:38.998551  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:40.998664  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:42.999010  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:45.498414  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:47.498620  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:49.998601  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:51.999470  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:54.499043  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:56.499562  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:00:58.998197  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:00.998372  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:02.999674  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:05.499244  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:07.998930  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:10.499101  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:12.499436  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:14.998244  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:16.998957  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:19.499569  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:21.503811  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:23.998532  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:26.001410  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:28.497652  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:30.497882  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:32.498505  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:34.499389  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:36.998781  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:39.497987  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:41.999075  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:43.999131  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:45.999453  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:48.498612  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:50.502349  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:53.000328  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:55.498350  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:57.498897  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:01:59.998589  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:02.498112  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:04.499166  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:06.499366  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:08.998138  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:10.998798  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:12.998867  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:14.999708  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:17.499134  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:19.998038  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:21.999415  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:24.503262  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:26.998872  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:28.999023  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:31.498312  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:33.498493  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:35.999270  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:38.499111  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:40.998862  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:43.499053  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:45.499484  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:47.499802  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:49.999065  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:51.999352  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:54.503567  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:56.998735  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:02:58.999291  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:00.999500  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:03.001366  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:05.498670  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:07.499251  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:09.998225  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:11.999084  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:14.499690  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:16.998485  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:19.498295  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:21.498521  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:23.499957  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:25.998718  227869 pod_ready.go:102] pod "weave-net-dgkzh" in "kube-system" namespace has status "Ready":"False"
	I0221 09:03:26.503352  227869 pod_ready.go:81] duration metric: took 4m0.410759109s waiting for pod "weave-net-dgkzh" in "kube-system" namespace to be "Ready" ...
	E0221 09:03:26.503375  227869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0221 09:03:26.503381  227869 pod_ready.go:38] duration metric: took 8m1.440836229s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0221 09:03:26.503404  227869 api_server.go:51] waiting for apiserver process to appear ...
	I0221 09:03:26.505928  227869 out.go:176] 
	W0221 09:03:26.506107  227869 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0221 09:03:26.506213  227869 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0221 09:03:26.506230  227869 out.go:241] * Related issues:
	* Related issues:
	W0221 09:03:26.506275  227869 out.go:241]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0221 09:03:26.506318  227869 out.go:241]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0221 09:03:26.507855  227869 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (519.15s)

TestNetworkPlugins/group/false/DNS (373.56s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.166804753s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:56:49.826677    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.200105307s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141837539s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:57:30.568649    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135333876s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15077008s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16303599s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:58:29.174144    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136895197s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.156602484s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 08:59:05.984234    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 08:59:33.149060    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.249345945s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 08:59:33.667848    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151779206s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:00:10.800062    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.805340    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.815646    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.835911    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.876175    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:10.956525    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:11.116743    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:11.437135    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:12.077939    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:13.358145    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:15.918473    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:21.038745    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:00:31.279147    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:00:51.760221    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157586177s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0221 09:01:32.721004    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context false-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.165653864s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (373.56s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (322.31s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157342824s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159318154s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:02:30.569358    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126372066s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:02:54.642100    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150214828s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.172290442s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151381391s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:04:05.983652    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.156779625s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13259579s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:04:33.148416    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126335273s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:05:10.800096    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133340056s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133325385s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:06:16.369831    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:16.375077    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:16.385327    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:16.405618    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:16.445952    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:16.526233    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:16.686635    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:17.007118    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:17.648208    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:18.928460    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context auto-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144808472s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/auto/DNS (322.31s)

x
+
TestNetworkPlugins/group/kindnet/DNS (352.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.200766854s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148465058s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140488104s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136630445s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148676284s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129512436s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0221 09:05:38.483426    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128902243s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12772282s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0221 09:06:21.489618    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:26.610648    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:06:32.220908    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:06:36.851605    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137517782s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0221 09:06:57.332109    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:07:30.569068    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:07:38.292715    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151937061s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14103386s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:09:33.149088    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kindnet-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.253102507s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kindnet/DNS (352.09s)

TestNetworkPlugins/group/enable-default-cni/DNS (360.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155817581s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:09:00.213316    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:09:05.984497    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137360185s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127057858s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136062008s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.189892235s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:10:10.799921    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143725941s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:10:29.028950    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 09:10:33.614245    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128824565s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1416818s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133379348s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:11:44.054101    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144342485s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122491787s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:13:07.990646    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:14:29.911677    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.278098961s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (360.29s)

TestNetworkPlugins/group/bridge/DNS (281.38s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.160193644s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127958606s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146887863s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130719111s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15894252s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127007778s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13905095s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:11:16.369953    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132482174s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:11:46.065538    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.071578    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.082474    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.103250    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.144057    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.225233    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.386034    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:46.706601    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:47.347094    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:48.628104    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:51.188585    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:11:56.308747    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138079671s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125001889s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:12:30.568932    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:12:36.193616    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:13:29.174402    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143988318s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (281.38s)

TestNetworkPlugins/group/kubenet/DNS (370.31s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148930134s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141584405s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12948673s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132475766s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14308808s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:13:52.327955    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:13:55.511603    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:14:02.568935    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:14:05.984218    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147684786s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:14:15.992056    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:14:23.049680    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.177565411s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:14:33.149259    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136446775s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:15:04.010193    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:15:10.799839    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123428364s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13232504s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:16:16.370294    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:16:18.873156    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:16:25.930686    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:16:33.843826    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory
E0221 09:16:46.065510    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140494652s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0221 09:17:13.752037    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:17:30.568686    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default
E0221 09:18:27.320235    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.325462    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.335697    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.355954    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.396226    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.476519    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.636908    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:27.957431    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:28.598254    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:29.174160    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:18:29.878647    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:32.439685    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:18:35.029781    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:18:37.560789    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220221084933-6550 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137813409s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (370.31s)
E0221 09:21:11.163791    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:21:16.370120    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:21:46.065878    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
E0221 09:21:58.037581    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.042867    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.053122    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.073463    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.113730    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.194079    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.354490    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:58.675033    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:21:59.315792    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:00.596200    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:03.157230    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:08.278024    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:16.217751    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.223072    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.233301    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.253567    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.293876    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.374850    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.535255    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:16.855806    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:17.496352    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:18.518194    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:18.776582    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:21.337172    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:26.457346    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:30.568387    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 09:22:36.698397    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:22:38.998520    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:22:39.415283    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/false-20220221084934-6550/client.crt: no such file or directory
E0221 09:22:57.179497    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
E0221 09:23:12.221760    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:23:19.959136    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
E0221 09:23:27.319752    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:23:29.174104    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 09:23:35.029378    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
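The assertion at net_test.go:174 above is a substring match: the test expects the cluster service IP "10.96.0.1" to appear in nslookup's output, and fails when only the timeout message is returned. A minimal sketch of that comparison (a hypothetical helper for illustration, not minikube's actual test code):

```python
def dns_resolved(nslookup_output: str, want: str = "10.96.0.1") -> bool:
    """Return True if the expected ClusterIP appears anywhere in the output,
    mirroring the got/want substring check the test log reports."""
    return want in nslookup_output

# The failing runs above produced only a timeout message (got="...\n\n\n"):
failed = ";; connection timed out; no servers could be reached\n\n\n"
print(dns_resolved(failed))  # False: no DNS server answered
```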


Tests passed (250/279)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 5.52
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.23.4/json-events 9.46
11 TestDownloadOnly/v1.23.4/preload-exists 0
15 TestDownloadOnly/v1.23.4/LogsDuration 0.07
17 TestDownloadOnly/v1.23.5-rc.0/json-events 17.37
20 TestDownloadOnly/v1.23.5-rc.0/binaries 0
22 TestDownloadOnly/v1.23.5-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.33
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
25 TestDownloadOnlyKic 27.45
26 TestBinaryMirror 0.86
27 TestOffline 65.99
29 TestAddons/Setup 139.98
31 TestAddons/parallel/Registry 25.08
32 TestAddons/parallel/Ingress 29.9
33 TestAddons/parallel/MetricsServer 5.67
34 TestAddons/parallel/HelmTiller 16.58
36 TestAddons/parallel/CSI 42.88
38 TestAddons/serial/GCPAuth 46.35
39 TestAddons/StoppedEnableDisable 11.34
40 TestCertOptions 37.18
41 TestCertExpiration 221.56
42 TestDockerFlags 41.63
43 TestForceSystemdFlag 41.42
44 TestForceSystemdEnv 42.01
45 TestKVMDriverInstallOrUpdate 8.4
49 TestErrorSpam/setup 26
50 TestErrorSpam/start 0.89
51 TestErrorSpam/status 1.14
52 TestErrorSpam/pause 1.47
53 TestErrorSpam/unpause 1.59
54 TestErrorSpam/stop 10.96
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 42.73
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 5.6
61 TestFunctional/serial/KubeContext 0.03
62 TestFunctional/serial/KubectlGetPods 0.17
65 TestFunctional/serial/CacheCmd/cache/add_remote 8.28
66 TestFunctional/serial/CacheCmd/cache/add_local 2.65
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.47
70 TestFunctional/serial/CacheCmd/cache/cache_reload 2.81
71 TestFunctional/serial/CacheCmd/cache/delete 0.13
72 TestFunctional/serial/MinikubeKubectlCmd 0.11
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
74 TestFunctional/serial/ExtraConfig 28.1
75 TestFunctional/serial/ComponentHealth 0.06
76 TestFunctional/serial/LogsCmd 1.3
77 TestFunctional/serial/LogsFileCmd 1.28
79 TestFunctional/parallel/ConfigCmd 0.43
80 TestFunctional/parallel/DashboardCmd 3.69
81 TestFunctional/parallel/DryRun 0.51
82 TestFunctional/parallel/InternationalLanguage 0.22
83 TestFunctional/parallel/StatusCmd 1.44
86 TestFunctional/parallel/ServiceCmd 24.18
87 TestFunctional/parallel/AddonsCmd 0.26
88 TestFunctional/parallel/PersistentVolumeClaim 45.73
90 TestFunctional/parallel/SSHCmd 0.74
91 TestFunctional/parallel/CpCmd 1.51
92 TestFunctional/parallel/MySQL 27.81
93 TestFunctional/parallel/FileSync 0.42
94 TestFunctional/parallel/CertSync 2.34
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
102 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
103 TestFunctional/parallel/Version/short 0.06
104 TestFunctional/parallel/Version/components 1.17
105 TestFunctional/parallel/ProfileCmd/profile_list 0.5
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
110 TestFunctional/parallel/ImageCommands/ImageBuild 3.04
111 TestFunctional/parallel/ImageCommands/Setup 2.64
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
113 TestFunctional/parallel/DockerEnv/bash 1.3
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.71
118 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.2
119 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.49
120 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.12
121 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
122 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.19
123 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.72
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.22
128 TestFunctional/parallel/MountCmd/any-port 7.57
129 TestFunctional/parallel/MountCmd/specific-port 2.44
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
136 TestFunctional/delete_addon-resizer_images 0.1
137 TestFunctional/delete_my-image_image 0.03
138 TestFunctional/delete_minikube_cached_images 0.03
141 TestIngressAddonLegacy/StartLegacyK8sCluster 56.35
143 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.15
144 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.4
145 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.1
148 TestJSONOutput/start/Command 44.04
149 TestJSONOutput/start/Audit 0
151 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/pause/Command 0.66
155 TestJSONOutput/pause/Audit 0
157 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/unpause/Command 0.56
161 TestJSONOutput/unpause/Audit 0
163 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/stop/Command 10.91
167 TestJSONOutput/stop/Audit 0
169 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
171 TestErrorJSONOutput 0.29
173 TestKicCustomNetwork/create_custom_network 29.1
174 TestKicCustomNetwork/use_default_bridge_network 29.08
175 TestKicExistingNetwork 29.89
176 TestMainNoArgs 0.06
179 TestMountStart/serial/StartWithMountFirst 5.77
180 TestMountStart/serial/VerifyMountFirst 0.33
181 TestMountStart/serial/StartWithMountSecond 5.82
182 TestMountStart/serial/VerifyMountSecond 0.34
183 TestMountStart/serial/DeleteFirst 1.75
184 TestMountStart/serial/VerifyMountPostDelete 0.33
185 TestMountStart/serial/Stop 1.27
186 TestMountStart/serial/RestartStopped 6.97
187 TestMountStart/serial/VerifyMountPostStop 0.33
190 TestMultiNode/serial/FreshStart2Nodes 86.12
191 TestMultiNode/serial/DeployApp2Nodes 5.42
192 TestMultiNode/serial/PingHostFrom2Pods 0.83
193 TestMultiNode/serial/AddNode 28.28
194 TestMultiNode/serial/ProfileList 0.37
195 TestMultiNode/serial/CopyFile 12
196 TestMultiNode/serial/StopNode 2.53
197 TestMultiNode/serial/StartAfterStop 24.65
198 TestMultiNode/serial/RestartKeepsNodes 103.61
199 TestMultiNode/serial/DeleteNode 5.37
200 TestMultiNode/serial/StopMultiNode 21.69
201 TestMultiNode/serial/RestartMultiNode 59.97
202 TestMultiNode/serial/ValidateNameConflict 29.84
207 TestPreload 115.7
209 TestScheduledStopUnix 100.2
210 TestSkaffold 72.09
212 TestInsufficientStorage 15.21
213 TestRunningBinaryUpgrade 127.8
215 TestKubernetesUpgrade 107.64
216 TestMissingContainerUpgrade 154.43
229 TestPause/serial/Start 48.04
238 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
239 TestNoKubernetes/serial/StartWithK8s 25.95
240 TestNoKubernetes/serial/StartWithStopK8s 17.53
241 TestPause/serial/SecondStartNoReconfiguration 38.98
242 TestNoKubernetes/serial/Start 7.01
243 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
244 TestNoKubernetes/serial/ProfileList 6.19
245 TestNoKubernetes/serial/Stop 1.31
246 TestNoKubernetes/serial/StartNoArgs 6.32
247 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
248 TestStoppedBinaryUpgrade/Setup 0.52
249 TestStoppedBinaryUpgrade/Upgrade 71.04
250 TestPause/serial/Pause 1.21
251 TestPause/serial/VerifyStatus 0.4
252 TestPause/serial/Unpause 0.96
253 TestPause/serial/PauseAgain 0.99
254 TestPause/serial/DeletePaused 3.01
255 TestNetworkPlugins/group/auto/Start 496.11
256 TestPause/serial/VerifyDeletedResources 0.88
257 TestNetworkPlugins/group/cilium/Start 97.4
258 TestStoppedBinaryUpgrade/MinikubeLogs 2.12
261 TestNetworkPlugins/group/cilium/ControllerPod 5.02
262 TestNetworkPlugins/group/cilium/KubeletFlags 0.39
263 TestNetworkPlugins/group/cilium/NetCatPod 12.91
264 TestNetworkPlugins/group/cilium/DNS 0.18
265 TestNetworkPlugins/group/cilium/Localhost 0.13
266 TestNetworkPlugins/group/cilium/HairPin 0.2
267 TestNetworkPlugins/group/false/Start 42.77
268 TestNetworkPlugins/group/false/KubeletFlags 0.41
269 TestNetworkPlugins/group/false/NetCatPod 11.21
271 TestNetworkPlugins/group/auto/KubeletFlags 0.43
272 TestNetworkPlugins/group/auto/NetCatPod 12.37
274 TestNetworkPlugins/group/kindnet/Start 48.67
275 TestNetworkPlugins/group/enable-default-cni/Start 294.69
276 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
277 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
278 TestNetworkPlugins/group/kindnet/NetCatPod 11.19
279 TestNetworkPlugins/group/bridge/Start 290.53
281 TestNetworkPlugins/group/kubenet/Start 290.28
282 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
283 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.22
285 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
286 TestNetworkPlugins/group/bridge/NetCatPod 11.26
289 TestStartStop/group/old-k8s-version/serial/FirstStart 129.32
290 TestStartStop/group/old-k8s-version/serial/DeployApp 8.32
291 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.64
292 TestStartStop/group/old-k8s-version/serial/Stop 10.97
293 TestNetworkPlugins/group/kubenet/KubeletFlags 0.36
294 TestNetworkPlugins/group/kubenet/NetCatPod 12.3
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
296 TestStartStop/group/old-k8s-version/serial/SecondStart 410.35
299 TestStartStop/group/no-preload/serial/FirstStart 54.58
300 TestStartStop/group/no-preload/serial/DeployApp 8.45
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
302 TestStartStop/group/no-preload/serial/Stop 10.88
304 TestStartStop/group/embed-certs/serial/FirstStart 294.16
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
306 TestStartStop/group/no-preload/serial/SecondStart 579.08
308 TestStartStop/group/default-k8s-different-port/serial/FirstStart 291.78
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.43
312 TestStartStop/group/old-k8s-version/serial/Pause 3.25
314 TestStartStop/group/newest-cni/serial/FirstStart 51.72
315 TestStartStop/group/embed-certs/serial/DeployApp 12.34
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.64
317 TestStartStop/group/embed-certs/serial/Stop 12.18
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
319 TestStartStop/group/embed-certs/serial/SecondStart 573.67
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
322 TestStartStop/group/newest-cni/serial/Stop 10.96
323 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/newest-cni/serial/SecondStart 20.15
325 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
326 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
328 TestStartStop/group/newest-cni/serial/Pause 3.12
329 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.45
330 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.61
331 TestStartStop/group/default-k8s-different-port/serial/Stop 10.73
332 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.2
333 TestStartStop/group/default-k8s-different-port/serial/SecondStart 571.24
334 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
335 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.19
336 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
337 TestStartStop/group/no-preload/serial/Pause 3.11
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.18
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
341 TestStartStop/group/embed-certs/serial/Pause 3.08
342 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
343 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.2
344 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.37
345 TestStartStop/group/default-k8s-different-port/serial/Pause 2.95
TestDownloadOnly/v1.16.0/json-events (5.52s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.519928772s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.52s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220221082507-6550
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220221082507-6550: exit status 85 (80.574565ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/21 08:25:07
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0221 08:25:07.341812    6562 out.go:297] Setting OutFile to fd 1 ...
	I0221 08:25:07.341888    6562 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:25:07.341892    6562 out.go:310] Setting ErrFile to fd 2...
	I0221 08:25:07.341896    6562 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:25:07.341985    6562 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
	W0221 08:25:07.342094    6562 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: no such file or directory
	I0221 08:25:07.342352    6562 out.go:304] Setting JSON to true
	I0221 08:25:07.343186    6562 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":462,"bootTime":1645431446,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0221 08:25:07.343264    6562 start.go:122] virtualization: kvm guest
	I0221 08:25:07.346130    6562 notify.go:193] Checking for updates...
	W0221 08:25:07.346151    6562 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball: no such file or directory
	I0221 08:25:07.348036    6562 driver.go:344] Setting default libvirt URI to qemu:///system
	I0221 08:25:07.384319    6562 docker.go:132] docker version: linux-20.10.12
	I0221 08:25:07.384427    6562 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:25:07.772826    6562 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:07.412517232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:25:07.772952    6562 docker.go:237] overlay module found
	I0221 08:25:07.774965    6562 start.go:281] selected driver: docker
	I0221 08:25:07.774978    6562 start.go:798] validating driver "docker" against <nil>
	I0221 08:25:07.775157    6562 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:25:07.861920    6562 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:07.801093202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:25:07.862063    6562 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0221 08:25:07.862590    6562 start_flags.go:369] Using suggested 8000MB memory alloc based on sys=32104MB, container=32104MB
	I0221 08:25:07.862697    6562 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0221 08:25:07.862717    6562 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0221 08:25:07.862737    6562 cni.go:93] Creating CNI manager for ""
	I0221 08:25:07.862745    6562 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0221 08:25:07.862759    6562 start_flags.go:302] config:
	{Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:25:07.864969    6562 cache.go:120] Beginning downloading kic base image for docker with docker
	I0221 08:25:07.866438    6562 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0221 08:25:07.866557    6562 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
	I0221 08:25:07.906523    6562 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
	I0221 08:25:07.906554    6562 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
	I0221 08:25:08.056520    6562 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0221 08:25:08.056553    6562 cache.go:57] Caching tarball of preloaded images
	I0221 08:25:08.056810    6562 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0221 08:25:08.059171    6562 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0221 08:25:08.251350    6562 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:0c23f68e9d9de4489f09a530426fd1e3 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0221 08:25:10.812546    6562 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0221 08:25:10.812636    6562 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0221 08:25:11.727380    6562 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0221 08:25:11.727669    6562 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/download-only-20220221082507-6550/config.json ...
	I0221 08:25:11.727699    6562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/download-only-20220221082507-6550/config.json: {Name:mkeee4e3cacb9472f15dbfb8f01d43ade0c1140b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0221 08:25:11.727870    6562 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0221 08:25:11.728047    6562 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220221082507-6550"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.23.4/json-events (9.46s)

=== RUN   TestDownloadOnly/v1.23.4/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.23.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.23.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.460512296s)
--- PASS: TestDownloadOnly/v1.23.4/json-events (9.46s)

TestDownloadOnly/v1.23.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.4/preload-exists
--- PASS: TestDownloadOnly/v1.23.4/preload-exists (0.00s)

TestDownloadOnly/v1.23.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.23.4/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220221082507-6550
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220221082507-6550: exit status 85 (70.609988ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/21 08:25:12
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0221 08:25:12.949901    6710 out.go:297] Setting OutFile to fd 1 ...
	I0221 08:25:12.949982    6710 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:25:12.949986    6710 out.go:310] Setting ErrFile to fd 2...
	I0221 08:25:12.949991    6710 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:25:12.950094    6710 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
	W0221 08:25:12.950200    6710 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: no such file or directory
	I0221 08:25:12.950307    6710 out.go:304] Setting JSON to true
	I0221 08:25:12.951089    6710 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":467,"bootTime":1645431446,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0221 08:25:12.951164    6710 start.go:122] virtualization: kvm guest
	I0221 08:25:12.953891    6710 notify.go:193] Checking for updates...
	I0221 08:25:12.956357    6710 config.go:176] Loaded profile config "download-only-20220221082507-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0221 08:25:12.956407    6710 start.go:706] api.Load failed for download-only-20220221082507-6550: filestore "download-only-20220221082507-6550": Docker machine "download-only-20220221082507-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0221 08:25:12.956451    6710 driver.go:344] Setting default libvirt URI to qemu:///system
	W0221 08:25:12.956481    6710 start.go:706] api.Load failed for download-only-20220221082507-6550: filestore "download-only-20220221082507-6550": Docker machine "download-only-20220221082507-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0221 08:25:12.993689    6710 docker.go:132] docker version: linux-20.10.12
	I0221 08:25:12.993805    6710 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:25:13.083467    6710 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:13.022104163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:25:13.083575    6710 docker.go:237] overlay module found
	I0221 08:25:13.085798    6710 start.go:281] selected driver: docker
	I0221 08:25:13.085820    6710 start.go:798] validating driver "docker" against &{Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:25:13.086071    6710 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:25:13.174487    6710 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:13.11529272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:25:13.175102    6710 cni.go:93] Creating CNI manager for ""
	I0221 08:25:13.175117    6710 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0221 08:25:13.175127    6710 start_flags.go:302] config:
	{Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:25:13.177240    6710 cache.go:120] Beginning downloading kic base image for docker with docker
	I0221 08:25:13.178787    6710 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
	I0221 08:25:13.178899    6710 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
	I0221 08:25:13.220445    6710 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
	I0221 08:25:13.220475    6710 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
	I0221 08:25:13.366694    6710 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4
	I0221 08:25:13.366730    6710 cache.go:57] Caching tarball of preloaded images
	I0221 08:25:13.367078    6710 preload.go:132] Checking if preload exists for k8s version v1.23.4 and runtime docker
	I0221 08:25:13.369423    6710 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4 ...
	I0221 08:25:13.559042    6710 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4?checksum=md5:a60a5fe29a46acf7752603452100b8a6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220221082507-6550"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.4/LogsDuration (0.07s)

TestDownloadOnly/v1.23.5-rc.0/json-events (17.37s)

=== RUN   TestDownloadOnly/v1.23.5-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.23.5-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220221082507-6550 --force --alsologtostderr --kubernetes-version=v1.23.5-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (17.365444552s)
--- PASS: TestDownloadOnly/v1.23.5-rc.0/json-events (17.37s)

TestDownloadOnly/v1.23.5-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.5-rc.0/binaries
--- PASS: TestDownloadOnly/v1.23.5-rc.0/binaries (0.00s)

TestDownloadOnly/v1.23.5-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.23.5-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220221082507-6550
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220221082507-6550: exit status 85 (74.306384ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/21 08:25:22
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.17.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0221 08:25:22.476308    6856 out.go:297] Setting OutFile to fd 1 ...
	I0221 08:25:22.476399    6856 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:25:22.476410    6856 out.go:310] Setting ErrFile to fd 2...
	I0221 08:25:22.476413    6856 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:25:22.476508    6856 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
	W0221 08:25:22.476615    6856 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/config/config.json: no such file or directory
	I0221 08:25:22.476716    6856 out.go:304] Setting JSON to true
	I0221 08:25:22.477427    6856 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":477,"bootTime":1645431446,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0221 08:25:22.477490    6856 start.go:122] virtualization: kvm guest
	I0221 08:25:22.480003    6856 notify.go:193] Checking for updates...
	I0221 08:25:22.482145    6856 config.go:176] Loaded profile config "download-only-20220221082507-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	W0221 08:25:22.482197    6856 start.go:706] api.Load failed for download-only-20220221082507-6550: filestore "download-only-20220221082507-6550": Docker machine "download-only-20220221082507-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0221 08:25:22.482234    6856 driver.go:344] Setting default libvirt URI to qemu:///system
	W0221 08:25:22.482256    6856 start.go:706] api.Load failed for download-only-20220221082507-6550: filestore "download-only-20220221082507-6550": Docker machine "download-only-20220221082507-6550" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0221 08:25:22.517228    6856 docker.go:132] docker version: linux-20.10.12
	I0221 08:25:22.517337    6856 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:25:22.602412    6856 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:22.543231344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:25:22.602537    6856 docker.go:237] overlay module found
	I0221 08:25:22.604631    6856 start.go:281] selected driver: docker
	I0221 08:25:22.604643    6856 start.go:798] validating driver "docker" against &{Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:25:22.604864    6856 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:25:22.688087    6856 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-21 08:25:22.630804808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:25:22.688626    6856 cni.go:93] Creating CNI manager for ""
	I0221 08:25:22.688642    6856 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0221 08:25:22.688650    6856 start_flags.go:302] config:
	{Name:download-only-20220221082507-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5-rc.0 ClusterName:download-only-20220221082507-6550 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:25:22.690960    6856 cache.go:120] Beginning downloading kic base image for docker with docker
	I0221 08:25:22.692557    6856 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker
	I0221 08:25:22.692676    6856 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
	I0221 08:25:22.736597    6856 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
	I0221 08:25:22.736621    6856 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
	W0221 08:25:22.837637    6856 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.5-rc.0/preloaded-images-k8s-v17-v1.23.5-rc.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0221 08:25:22.837772    6856 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/download-only-20220221082507-6550/config.json ...
	I0221 08:25:22.837899    6856 cache.go:107] acquiring lock: {Name:mkae39637d54454769ea96c0928557495a2624a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.837906    6856 cache.go:107] acquiring lock: {Name:mk048af2cde148e8a512f7653817cea4bb1a47e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.837921    6856 cache.go:107] acquiring lock: {Name:mk4db3a52d1f4fba9dc9223f3164cb8742f00f2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.838009    6856 preload.go:132] Checking if preload exists for k8s version v1.23.5-rc.0 and runtime docker
	I0221 08:25:22.838046    6856 cache.go:107] acquiring lock: {Name:mk8eae83c87e69d4f61d57feebab23b9c618f6ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.838047    6856 cache.go:107] acquiring lock: {Name:mkf4838fe0f0754a09f1960b33e83e9fd73716a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.838082    6856 cache.go:107] acquiring lock: {Name:mk9f52e4209628388c7268565716f70b6a94e740 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.838097    6856 cache.go:107] acquiring lock: {Name:mkc848fd9c1e80ffd1414dd8603c19c641b3fcb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.838141    6856 cache.go:107] acquiring lock: {Name:mkd0cd2ae3afc8e39e716bbcd5f1e196bdbc0e1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.838151    6856 cache.go:107] acquiring lock: {Name:mk8cb7540d8a1bd7faccdcc974630f93843749a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.838002    6856 cache.go:107] acquiring lock: {Name:mk0340c3f1bf4216c7deeea4078501a3da4b3533 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0221 08:25:22.838335    6856 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubeadm
	I0221 08:25:22.838332    6856 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubelet
	I0221 08:25:22.838375    6856 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.23.5-rc.0
	I0221 08:25:22.838413    6856 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.23.5-rc.0
	I0221 08:25:22.838460    6856 image.go:134] retrieving image: k8s.gcr.io/pause:3.6
	I0221 08:25:22.838478    6856 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0221 08:25:22.838485    6856 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0221 08:25:22.838381    6856 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.1-0
	I0221 08:25:22.838602    6856 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0221 08:25:22.838706    6856 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/linux/amd64/v1.23.5-rc.0/kubectl
	I0221 08:25:22.838743    6856 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0
	I0221 08:25:22.838805    6856 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.23.5-rc.0
	I0221 08:25:22.838913    6856 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0221 08:25:22.839659    6856 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.23.5-rc.0: Error response from daemon: reference does not exist
	I0221 08:25:22.839681    6856 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
	I0221 08:25:22.839698    6856 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
	I0221 08:25:22.839738    6856 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.23.5-rc.0: Error response from daemon: reference does not exist
	I0221 08:25:22.839935    6856 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.23.5-rc.0: Error response from daemon: reference does not exist
	I0221 08:25:22.840166    6856 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0: Error response from daemon: reference does not exist
	I0221 08:25:22.853290    6856 image.go:176] found k8s.gcr.io/pause:3.6 locally: &{UncompressedImageCore:0xc000010348 lock:{state:0 sema:0} manifest:<nil>}
	I0221 08:25:22.853328    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6
	I0221 08:25:22.896570    6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0221 08:25:22.896616    6856 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 58.707968ms
	I0221 08:25:22.896631    6856 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0221 08:25:23.142313    6856 image.go:176] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc0000102f8 lock:{state:0 sema:0} manifest:<nil>}
	I0221 08:25:23.142361    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0221 08:25:23.277736    6856 image.go:176] found k8s.gcr.io/coredns/coredns:v1.8.6 locally: &{UncompressedImageCore:0xc0007262a8 lock:{state:0 sema:0} manifest:<nil>}
	I0221 08:25:23.277771    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0221 08:25:24.447266    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0
	I0221 08:25:24.512209    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0
	I0221 08:25:24.519076    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0
	I0221 08:25:24.669999    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0
	I0221 08:25:24.813579    6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0221 08:25:24.813629    6856 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.975479932s
	I0221 08:25:24.813652    6856 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0221 08:25:25.075060    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I0221 08:25:25.176535    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1
	I0221 08:25:25.474083    6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I0221 08:25:25.474137    6856 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 2.636245671s
	I0221 08:25:25.474154    6856 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I0221 08:25:25.538233    6856 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5-rc.0/bin/linux/amd64/kubectl.sha256
	I0221 08:25:25.803448    6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0221 08:25:25.803500    6856 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 2.965515438s
	I0221 08:25:25.803512    6856 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0221 08:25:25.880437    6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I0221 08:25:25.880487    6856 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 3.042406668s
	I0221 08:25:25.880505    6856 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I0221 08:25:25.921393    6856 image.go:176] found k8s.gcr.io/etcd:3.5.1-0 locally: &{UncompressedImageCore:0xc0001140c0 lock:{state:0 sema:0} manifest:<nil>}
	I0221 08:25:25.921442    6856 cache.go:161] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0
	I0221 08:25:26.958552    6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 exists
	I0221 08:25:26.958604    6856 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0" took 4.120568852s
	I0221 08:25:26.958622    6856 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.5-rc.0 succeeded
	I0221 08:25:27.302477    6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 exists
	I0221 08:25:27.302518    6856 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0" took 4.464519367s
	I0221 08:25:27.302529    6856 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.5-rc.0 succeeded
	I0221 08:25:27.361188    6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 exists
	I0221 08:25:27.361247    6856 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0" took 4.523216477s
	I0221 08:25:27.361264    6856 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.5-rc.0 succeeded
	I0221 08:25:27.846070    6856 cache.go:156] /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 exists
	I0221 08:25:27.846126    6856 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.5-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0" took 5.008253061s
	I0221 08:25:27.846144    6856 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.5-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.5-rc.0 succeeded
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220221082507-6550"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5-rc.0/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.33s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.33s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220221082507-6550
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (27.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220221082540-6550 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:230: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220221082540-6550 --force --alsologtostderr --driver=docker  --container-runtime=docker: (26.139289452s)
helpers_test.go:176: Cleaning up "download-docker-20220221082540-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220221082540-6550
--- PASS: TestDownloadOnlyKic (27.45s)

TestBinaryMirror (0.86s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220221082608-6550 --alsologtostderr --binary-mirror http://127.0.0.1:46005 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-20220221082608-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220221082608-6550
--- PASS: TestBinaryMirror (0.86s)

TestOffline (65.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20220221084933-6550 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220221084933-6550 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m3.174570623s)
helpers_test.go:176: Cleaning up "offline-docker-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20220221084933-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220221084933-6550: (2.814225484s)
--- PASS: TestOffline (65.99s)

TestAddons/Setup (139.98s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220221082609-6550 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220221082609-6550 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m19.980124241s)
--- PASS: TestAddons/Setup (139.98s)

TestAddons/parallel/Registry (25.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 16.748295ms
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-56gtt" [4b08259b-50fc-4dc8-bc8b-6149e221c3b0] Running
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.027979176s
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-pfpv4" [f3d3b5d3-2b5b-4921-8973-afd1444b4bc1] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008285011s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220221082609-6550 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220221082609-6550 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:296: (dbg) Done: kubectl --context addons-20220221082609-6550 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.24052981s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 ip
2022/02/21 08:28:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:339: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (25.08s)

TestAddons/parallel/Ingress (29.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220221082609-6550 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220221082609-6550 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220221082609-6550 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [19f0ad74-9545-41a0-91ac-ada04e4b7059] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [19f0ad74-9545-41a0-91ac-ada04e4b7059] Running
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 19.006461357s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context addons-20220221082609-6550 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable ingress-dns --alsologtostderr -v=1: (1.406711293s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable ingress --alsologtostderr -v=1: (7.538188737s)
--- PASS: TestAddons/parallel/Ingress (29.90s)

TestAddons/parallel/MetricsServer (5.67s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 16.198388ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-6b76bd68b6-4kxhz" [b0343546-e7d8-45c2-b499-0687d6368039] Running
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.030445691s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220221082609-6550 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.67s)

TestAddons/parallel/HelmTiller (16.58s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 16.102604ms
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-6d67d5465d-4rdms" [ff60f50a-a3f2-4595-8a20-4bec721efbda] Running
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.031495757s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220221082609-6550 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:424: (dbg) Done: kubectl --context addons-20220221082609-6550 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.205122066s)
addons_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (16.58s)

TestAddons/parallel/CSI (42.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 47.495762ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:515: (dbg) Done: kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pvc.yaml: (1.069500662s)
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220221082609-6550 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [2ef4973c-40c1-4215-902e-2748e4ff2d8d] Pending
helpers_test.go:343: "task-pv-pod" [2ef4973c-40c1-4215-902e-2748e4ff2d8d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod" [2ef4973c-40c1-4215-902e-2748e4ff2d8d] Running
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.005763864s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220221082609-6550 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:426: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220221082609-6550 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220221082609-6550 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220221082609-6550 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220221082609-6550 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220221082609-6550 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [2618d2cb-500e-40c8-abc7-8750e4a9f5d7] Pending
helpers_test.go:343: "task-pv-pod-restore" [2618d2cb-500e-40c8-abc7-8750e4a9f5d7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [2618d2cb-500e-40c8-abc7-8750e4a9f5d7] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.005775275s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220221082609-6550 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220221082609-6550 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220221082609-6550 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable csi-hostpath-driver --alsologtostderr -v=1

=== CONT  TestAddons/parallel/CSI
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.964678772s)
addons_test.go:593: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.88s)

TestAddons/serial/GCPAuth (46.35s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220221082609-6550 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [f4763a2d-b9a0-49c0-bc75-d380dc7e43c3] Pending
helpers_test.go:343: "busybox" [f4763a2d-b9a0-49c0-bc75-d380dc7e43c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [f4763a2d-b9a0-49c0-bc75-d380dc7e43c3] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.007454769s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220221082609-6550 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220221082609-6550 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons disable gcp-auth --alsologtostderr -v=1: (5.978927494s)
addons_test.go:682: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220221082609-6550 addons enable gcp-auth
addons_test.go:682: (dbg) Done: out/minikube-linux-amd64 -p addons-20220221082609-6550 addons enable gcp-auth: (2.941873728s)
addons_test.go:688: (dbg) Run:  kubectl --context addons-20220221082609-6550 apply -f testdata/private-image.yaml
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7f8587d5b7-v5swk" [2bca746d-92d7-4f3e-9085-55213ac943c7] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7f8587d5b7-v5swk" [2bca746d-92d7-4f3e-9085-55213ac943c7] Running
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 16.005459979s
addons_test.go:701: (dbg) Run:  kubectl --context addons-20220221082609-6550 apply -f testdata/private-image-eu.yaml
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-869dcfd8c7-fnqrl" [9fc4ad6a-90e3-4714-8f47-5bb03276ea3d] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-869dcfd8c7-fnqrl" [9fc4ad6a-90e3-4714-8f47-5bb03276ea3d] Running
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.005501653s
--- PASS: TestAddons/serial/GCPAuth (46.35s)

TestAddons/StoppedEnableDisable (11.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220221082609-6550
addons_test.go:133: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220221082609-6550: (11.143853848s)
addons_test.go:137: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220221082609-6550
addons_test.go:141: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220221082609-6550
--- PASS: TestAddons/StoppedEnableDisable (11.34s)

TestCertOptions (37.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220221085121-6550 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220221085121-6550 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (33.596057649s)
cert_options_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220221085121-6550 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:89: (dbg) Run:  kubectl --context cert-options-20220221085121-6550 config view
cert_options_test.go:101: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220221085121-6550 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220221085121-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220221085121-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220221085121-6550: (2.717251105s)
--- PASS: TestCertOptions (37.18s)

TestCertExpiration (221.56s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220221085105-6550 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220221085105-6550 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (34.151583726s)

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220221085105-6550 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220221085105-6550 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (4.610638724s)
helpers_test.go:176: Cleaning up "cert-expiration-20220221085105-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220221085105-6550
E0221 08:54:46.945048    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220221085105-6550: (2.794123983s)
--- PASS: TestCertExpiration (221.56s)

TestDockerFlags (41.63s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20220221085039-6550 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220221085039-6550 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.904788827s)
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220221085039-6550 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220221085039-6550 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20220221085039-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20220221085039-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220221085039-6550: (2.805789535s)
--- PASS: TestDockerFlags (41.63s)

TestForceSystemdFlag (41.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220221085024-6550 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220221085024-6550 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.132852763s)
docker_test.go:105: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220221085024-6550 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20220221085024-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220221085024-6550

=== CONT  TestForceSystemdFlag
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220221085024-6550: (2.703799425s)
--- PASS: TestForceSystemdFlag (41.42s)

TestForceSystemdEnv (42.01s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220221084942-6550 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0221 08:49:52.220676    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220221084942-6550 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.486715639s)
docker_test.go:105: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220221084942-6550 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20220221084942-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220221084942-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220221084942-6550: (7.011634254s)
--- PASS: TestForceSystemdEnv (42.01s)

TestKVMDriverInstallOrUpdate (8.4s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.40s)

TestErrorSpam/setup (26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220221083012-6550 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220221083012-6550 --driver=docker  --container-runtime=docker
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220221083012-6550 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220221083012-6550 --driver=docker  --container-runtime=docker: (25.996812242s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (26.00s)

TestErrorSpam/start (0.89s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 start --dry-run
--- PASS: TestErrorSpam/start (0.89s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

TestErrorSpam/stop (10.96s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 stop
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 stop: (10.697195648s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220221083012-6550 --log_dir /tmp/nospam-20220221083012-6550 stop
--- PASS: TestErrorSpam/stop (10.96s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1722: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/files/etc/test/nested/copy/6550/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (42.73s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2104: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220221083056-6550 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2104: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (42.728392204s)
--- PASS: TestFunctional/serial/StartWithProxy (42.73s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220221083056-6550 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --alsologtostderr -v=8: (5.59696837s)
functional_test.go:659: soft start took 5.597535193s for "functional-20220221083056-6550" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.60s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.17s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-20220221083056-6550 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.17s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1050: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:3.1
functional_test.go:1050: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:3.3
functional_test.go:1050: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:3.3: (4.091155504s)
functional_test.go:1050: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:latest
functional_test.go:1050: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add k8s.gcr.io/pause:latest: (3.668109034s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.28s)

TestFunctional/serial/CacheCmd/cache/add_local (2.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1081: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220221083056-6550 /tmp/functional-20220221083056-65501516765280
functional_test.go:1093: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add minikube-local-cache-test:functional-20220221083056-6550
functional_test.go:1093: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache add minikube-local-cache-test:functional-20220221083056-6550: (2.358855828s)
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 cache delete minikube-local-cache-test:functional-20220221083056-6550
functional_test.go:1087: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220221083056-6550
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.65s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1128: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.47s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (355.914503ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 cache reload
functional_test.go:1162: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 cache reload: (1.716988871s)
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.81s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1176: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1176: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 kubectl -- --context functional-20220221083056-6550 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-20220221083056-6550 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (28.1s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220221083056-6550 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (28.101472357s)
functional_test.go:757: restart took 28.101584553s for "functional-20220221083056-6550" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (28.10s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:811: (dbg) Run:  kubectl --context functional-20220221083056-6550 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:826: etcd phase: Running
functional_test.go:836: etcd status: Ready
functional_test.go:826: kube-apiserver phase: Running
functional_test.go:836: kube-apiserver status: Ready
functional_test.go:826: kube-controller-manager phase: Running
functional_test.go:836: kube-controller-manager status: Ready
functional_test.go:826: kube-scheduler phase: Running
functional_test.go:836: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.3s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 logs
functional_test.go:1240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 logs: (1.29824247s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 logs --file /tmp/functional-20220221083056-65504180110779/logs.txt
functional_test.go:1257: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 logs --file /tmp/functional-20220221083056-65504180110779/logs.txt: (1.284081106s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus: exit status 14 (67.20848ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 config set cpus 2
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 config unset cpus
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 config get cpus: exit status 14 (70.936264ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (3.69s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:906: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220221083056-6550 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:911: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220221083056-6550 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 44917: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.69s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (210.186354ms)
-- stdout --
	* [functional-20220221083056-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	
-- /stdout --
** stderr ** 
	I0221 08:33:03.104412   44475 out.go:297] Setting OutFile to fd 1 ...
	I0221 08:33:03.104519   44475 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:33:03.104539   44475 out.go:310] Setting ErrFile to fd 2...
	I0221 08:33:03.104548   44475 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:33:03.104689   44475 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
	I0221 08:33:03.104964   44475 out.go:304] Setting JSON to false
	I0221 08:33:03.106501   44475 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":937,"bootTime":1645431446,"procs":499,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0221 08:33:03.106707   44475 start.go:122] virtualization: kvm guest
	I0221 08:33:03.109623   44475 out.go:176] * [functional-20220221083056-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0221 08:33:03.111155   44475 out.go:176]   - MINIKUBE_LOCATION=13641
	I0221 08:33:03.112596   44475 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0221 08:33:03.114079   44475 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	I0221 08:33:03.115505   44475 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	I0221 08:33:03.116811   44475 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0221 08:33:03.117265   44475 config.go:176] Loaded profile config "functional-20220221083056-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:33:03.117638   44475 driver.go:344] Setting default libvirt URI to qemu:///system
	I0221 08:33:03.156349   44475 docker.go:132] docker version: linux-20.10.12
	I0221 08:33:03.156447   44475 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:33:03.244084   44475 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:68 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-02-21 08:33:03.185348421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:33:03.244186   44475 docker.go:237] overlay module found
	I0221 08:33:03.247262   44475 out.go:176] * Using the docker driver based on existing profile
	I0221 08:33:03.247286   44475 start.go:281] selected driver: docker
	I0221 08:33:03.247291   44475 start.go:798] validating driver "docker" against &{Name:functional-20220221083056-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:functional-20220221083056-6550 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:33:03.247390   44475 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0221 08:33:03.247424   44475 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0221 08:33:03.247442   44475 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0221 08:33:03.249584   44475 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0221 08:33:03.251447   44475 out.go:176] 
	W0221 08:33:03.251527   44475 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0221 08:33:03.252895   44475 out.go:176] 
** /stderr **
functional_test.go:992: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.51s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1021: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220221083056-6550 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (217.050144ms)
-- stdout --
	* [functional-20220221083056-6550] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	
-- /stdout --
** stderr ** 
	I0221 08:32:59.703556   43256 out.go:297] Setting OutFile to fd 1 ...
	I0221 08:32:59.703633   43256 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:32:59.703648   43256 out.go:310] Setting ErrFile to fd 2...
	I0221 08:32:59.703654   43256 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:32:59.703812   43256 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
	I0221 08:32:59.704055   43256 out.go:304] Setting JSON to false
	I0221 08:32:59.705399   43256 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":934,"bootTime":1645431446,"procs":484,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0221 08:32:59.705474   43256 start.go:122] virtualization: kvm guest
	I0221 08:32:59.709172   43256 out.go:176] * [functional-20220221083056-6550] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64)
	I0221 08:32:59.710616   43256 out.go:176]   - MINIKUBE_LOCATION=13641
	I0221 08:32:59.712103   43256 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0221 08:32:59.713633   43256 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	I0221 08:32:59.715091   43256 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	I0221 08:32:59.716393   43256 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0221 08:32:59.716881   43256 config.go:176] Loaded profile config "functional-20220221083056-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:32:59.717278   43256 driver.go:344] Setting default libvirt URI to qemu:///system
	I0221 08:32:59.758927   43256 docker.go:132] docker version: linux-20.10.12
	I0221 08:32:59.759019   43256 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:32:59.849011   43256 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:68 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-02-21 08:32:59.787874044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:32:59.849193   43256 docker.go:237] overlay module found
	I0221 08:32:59.852683   43256 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0221 08:32:59.852710   43256 start.go:281] selected driver: docker
	I0221 08:32:59.852717   43256 start.go:798] validating driver "docker" against &{Name:functional-20220221083056-6550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4 ClusterName:functional-20220221083056-6550 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0221 08:32:59.852842   43256 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0221 08:32:59.852887   43256 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0221 08:32:59.852915   43256 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0221 08:32:59.855562   43256 out.go:176]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0221 08:32:59.857565   43256 out.go:176] 
	W0221 08:32:59.857693   43256 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0221 08:32:59.859175   43256 out.go:176] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.44s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:861: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:873: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.44s)

TestFunctional/parallel/ServiceCmd (24.18s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-20220221083056-6550 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-20220221083056-6550 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-blt7n" [3c8202a0-8fc0-4c7f-98e5-c78ab64afc47] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-blt7n" [3c8202a0-8fc0-4c7f-98e5-c78ab64afc47] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 22.024660159s
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1468: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 service --namespace=default --https --url hello-node
functional_test.go:1484: found endpoint: https://192.168.49.2:30450
functional_test.go:1495: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 service hello-node --url --format={{.IP}}
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 service hello-node --url
functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:30450
functional_test.go:1521: Attempting to fetch http://192.168.49.2:30450 ...
functional_test.go:1541: http://192.168.49.2:30450: success! body:
Hostname: hello-node-54fbb85-blt7n
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30450
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmd (24.18s)

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1556: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 addons list
functional_test.go:1568: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (45.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [7d6bc60d-337e-47c1-9813-9053e6331422] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013160786s
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220221083056-6550 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220221083056-6550 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220221083056-6550 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220221083056-6550 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [29b617b5-acf8-4e56-8e39-7968cf045069] Pending
helpers_test.go:343: "sp-pod" [29b617b5-acf8-4e56-8e39-7968cf045069] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [29b617b5-acf8-4e56-8e39-7968cf045069] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.006507897s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220221083056-6550 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220221083056-6550 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20220221083056-6550 delete -f testdata/storage-provisioner/pod.yaml: (1.800687718s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220221083056-6550 apply -f testdata/storage-provisioner/pod.yaml
2022/02/21 08:33:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [61156257-a4be-4dbe-9ab3-83435cdbf3ba] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [61156257-a4be-4dbe-9ab3-83435cdbf3ba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [61156257-a4be-4dbe-9ab3-83435cdbf3ba] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.006498688s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220221083056-6550 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.73s)

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1591: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1608: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (1.51s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh -n functional-20220221083056-6550 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 cp functional-20220221083056-6550:/home/docker/cp-test.txt /tmp/mk_test3108556767/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh -n functional-20220221083056-6550 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.51s)

TestFunctional/parallel/MySQL (27.81s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1660: (dbg) Run:  kubectl --context functional-20220221083056-6550 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1666: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-59kjd" [1787aad8-0cdf-47b7-9708-ae1a4f17fb25] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-59kjd" [1787aad8-0cdf-47b7-9708-ae1a4f17fb25] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1666: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.007237507s
functional_test.go:1674: (dbg) Run:  kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;"
functional_test.go:1674: (dbg) Non-zero exit: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;": exit status 1 (144.319167ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1674: (dbg) Run:  kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;"
functional_test.go:1674: (dbg) Non-zero exit: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;": exit status 1 (286.921636ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1674: (dbg) Run:  kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;"
functional_test.go:1674: (dbg) Non-zero exit: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;": exit status 1 (508.566811ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1674: (dbg) Run:  kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;"
functional_test.go:1674: (dbg) Non-zero exit: kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;": exit status 1 (227.296646ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1674: (dbg) Run:  kubectl --context functional-20220221083056-6550 exec mysql-b87c45988-59kjd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.81s)

TestFunctional/parallel/FileSync (0.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1796: Checking for existence of /etc/test/nested/copy/6550/hosts within VM
functional_test.go:1798: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/test/nested/copy/6550/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1803: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

TestFunctional/parallel/CertSync (2.34s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1839: Checking for existence of /etc/ssl/certs/6550.pem within VM
functional_test.go:1840: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/ssl/certs/6550.pem"
functional_test.go:1839: Checking for existence of /usr/share/ca-certificates/6550.pem within VM
functional_test.go:1840: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /usr/share/ca-certificates/6550.pem"
functional_test.go:1839: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1840: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1866: Checking for existence of /etc/ssl/certs/65502.pem within VM
functional_test.go:1867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/ssl/certs/65502.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1866: Checking for existence of /usr/share/ca-certificates/65502.pem within VM
functional_test.go:1867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /usr/share/ca-certificates/65502.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1866: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.34s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-20220221083056-6550 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1894: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1894: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo systemctl is-active crio": exit status 1 (380.879029ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1280: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2126: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2140: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 version -o=json --components: (1.167722395s)
--- PASS: TestFunctional/parallel/Version/components (1.17s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "421.581534ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1334: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1339: Took "76.321037ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220221083056-6550
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-20220221083056-6550 | 1c6a4b268d30a | 30B    |
| k8s.gcr.io/kube-scheduler                   | v1.23.4                        | aceacb6244f9f | 53.5MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| docker.io/library/mysql                     | 5.7                            | 4181d485f6500 | 448MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.4                        | 62930710c9634 | 135MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.4                        | 25444908517a5 | 125MB  |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220221083056-6550 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/kube-proxy                       | v1.23.4                        | 2114245ec4d6b | 112MB  |
| docker.io/library/nginx                     | alpine                         | bef258acf10dc | 23.4MB |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                         | 7801cfc6d5c07 | 34.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | latest                         | c316d5a335a5c | 142MB  |
| docker.io/kubernetesui/dashboard            | v2.3.1                         | e1482a24335a6 | 220MB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format json:
[{"id":"2114245ec4d6bfb19bc69c3d72cfc2702f285040ceaf3b3d16deb67e0c3f53de","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.4"],"size":"112000000"},{"id":"aceacb6244f9f92ae8f084a4fbcc78cc67c3d6cb7eda3c6b6773c8d099b05ade","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.4"],"size":"53500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"1c6a4b268d30a95cea8b7c96515ca66999dd279261276af3c78f6545cfa24573","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220221083056-6550"],"size":"30"},{"id":"62930710c9634e1f7e53327a68b7b73fb81745817bbc1af3cfc17bba49e2029d","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.4"],"size":"135000000"},{"id":"bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"4181d485f6500849992cc568b26cfe13d98a7a2f995bc49a3e47b2fedf6468fe","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"448000000"},{"id":"25444908517a59c7cdc07534d3d71c3abe29c66305eb0254c668e881018b4c5f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.4"],"size":"125000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220221083056-6550"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls --format yaml:
- id: 25444908517a59c7cdc07534d3d71c3abe29c66305eb0254c668e881018b4c5f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.4
size: "125000000"
- id: aceacb6244f9f92ae8f084a4fbcc78cc67c3d6cb7eda3c6b6773c8d099b05ade
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.4
size: "53500000"
- id: bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 1c6a4b268d30a95cea8b7c96515ca66999dd279261276af3c78f6545cfa24573
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220221083056-6550
size: "30"
- id: 62930710c9634e1f7e53327a68b7b73fb81745817bbc1af3cfc17bba49e2029d
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.4
size: "135000000"
- id: c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 2114245ec4d6bfb19bc69c3d72cfc2702f285040ceaf3b3d16deb67e0c3f53de
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.4
size: "112000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 4181d485f6500849992cc568b26cfe13d98a7a2f995bc49a3e47b2fedf6468fe
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "448000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh pgrep buildkitd: exit status 1 (429.718149ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image build -t localhost/my-image:functional-20220221083056-6550 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image build -t localhost/my-image:functional-20220221083056-6550 testdata/build: (2.361877572s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220221083056-6550 image build -t localhost/my-image:functional-20220221083056-6550 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c94e52448441
Removing intermediate container c94e52448441
---> 9023514c8e8e
Step 3/3 : ADD content.txt /
---> ea5884b8ad43
Successfully built ea5884b8ad43
Successfully tagged localhost/my-image:functional-20220221083056-6550
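The three logged build steps correspond to a minimal Dockerfile in the test's build context; reconstructed here from the step lines above, so the exact file name and contents in testdata/build are assumptions:

```dockerfile
# Reconstructed from the "Step 1/3" .. "Step 3/3" lines in the log;
# the actual Dockerfile shipped in testdata/build may differ.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```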
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)

TestFunctional/parallel/ImageCommands/Setup (2.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.591385032s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.64s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1371: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: Took "404.770268ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1384: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1389: Took "62.659602ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/DockerEnv/bash (1.3s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220221083056-6550 docker-env) && out/minikube-linux-amd64 status -p functional-20220221083056-6550"
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220221083056-6550 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1986: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1986: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1986: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550: (3.455943179s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.71s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550: (2.804634501s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.15632346s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550: (5.886549361s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.49s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image save gcr.io/google-containers/addon-resizer:functional-20220221083056-6550 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image save gcr.io/google-containers/addon-resizer:functional-20220221083056-6550 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (2.116742269s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.12s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image rm gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-20220221083056-6550 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220221083056-6550: (2.649272259s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.72s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220221083056-6550 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220221083056-6550 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [205d70b3-6416-4912-865f-b41c560c2497] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [205d70b3-6416-4912-865f-b41c560c2497] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.00739565s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.22s)

TestFunctional/parallel/MountCmd/any-port (7.57s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest4202658807:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1645432379861750446" to /tmp/mounttest4202658807/created-by-test
functional_test_mount_test.go:110: wrote "test-1645432379861750446" to /tmp/mounttest4202658807/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1645432379861750446" to /tmp/mounttest4202658807/test-1645432379861750446
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.607743ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 21 08:32 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 21 08:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 21 08:32 test-1645432379861750446
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh cat /mount-9p/test-1645432379861750446
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220221083056-6550 replace --force -f testdata/busybox-mount-test.yaml
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [bcd85413-3f74-4a93-9045-92d8780ba4c0] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [bcd85413-3f74-4a93-9045-92d8780ba4c0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [bcd85413-3f74-4a93-9045-92d8780ba4c0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.0066182s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220221083056-6550 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest4202658807:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.57s)

TestFunctional/parallel/MountCmd/specific-port (2.44s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest1549300676:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (371.566894ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest1549300676:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh "sudo umount -f /mount-9p": exit status 1 (385.067231ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20220221083056-6550 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220221083056-6550 /tmp/mounttest1549300676:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.44s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220221083056-6550 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://10.111.40.165 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220221083056-6550 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220221083056-6550
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220221083056-6550
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220221083056-6550
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (56.35s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220221083319-6550 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0221 08:33:29.173867    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.179430    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.189715    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.209998    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.250349    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.331096    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.491485    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:29.812129    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:30.453076    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:31.733298    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:34.295065    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:39.415918    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:33:49.656769    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:34:10.137717    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220221083319-6550 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (56.351456358s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (56.35s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.15s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons enable ingress --alsologtostderr -v=5: (17.154201568s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.15s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.4s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.40s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.1s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220221083319-6550 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220221083319-6550 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.546885035s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220221083319-6550 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220221083319-6550 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [7a4afd79-d1a4-480a-8520-2efa86fc7de1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0221 08:34:51.098131    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
helpers_test.go:343: "nginx" [7a4afd79-d1a4-480a-8520-2efa86fc7de1] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.005882201s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20220221083319-6550 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons disable ingress-dns --alsologtostderr -v=1: (1.883277796s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220221083319-6550 addons disable ingress --alsologtostderr -v=1: (7.361315494s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.10s)
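The ingress validation above probes the node with `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` because the NGINX ingress controller routes on the Host header, not on the connect address. A sketch of building the same request in Python (the `ingress_request` helper is illustrative, not part of the suite):

```python
import urllib.request

def ingress_request(host_header, url="http://127.0.0.1/"):
    """Build the equivalent of `curl http://127.0.0.1/ -H 'Host: ...'`.

    The request connects to the node IP but carries the ingress host in
    its Host header, which is what the ingress controller matches on.
    """
    return urllib.request.Request(url, headers={"Host": host_header})
```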

                                                
                                    
TestJSONOutput/start/Command (44.04s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220221083514-6550 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220221083514-6550 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (44.03886183s)
--- PASS: TestJSONOutput/start/Command (44.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220221083514-6550 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220221083514-6550 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220221083514-6550 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220221083514-6550 --output=json --user=testUser: (10.909934352s)
--- PASS: TestJSONOutput/stop/Command (10.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.29s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220221083612-6550 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220221083612-6550 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.735478ms)

-- stdout --
	{"specversion":"1.0","id":"2ff00846-ac27-4f7c-ae9b-e59c1c1c9e2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220221083612-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8364300f-7b86-4b3f-bc3b-6b5b8d6a7f61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13641"}}
	{"specversion":"1.0","id":"db162834-5758-4600-82fc-09fd88707978","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c36fe10b-aac4-4c30-96b3-049a72498e73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig"}}
	{"specversion":"1.0","id":"e71e9bcc-bda6-4bbb-ab53-9e53f74711b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube"}}
	{"specversion":"1.0","id":"51d14628-55eb-4e87-9312-5353cc05c477","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9b819a67-ebd7-436c-9f3d-1912fe3a1c1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220221083612-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220221083612-6550
--- PASS: TestErrorJSONOutput (0.29s)
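With `--output=json`, minikube emits one CloudEvents-style JSON object per line, as shown in the stdout block above; the final `io.k8s.sigs.minikube.error` event carries the exit code. A sketch of scanning such output for the error code (`error_exitcode` is a hypothetical helper; the sample event is trimmed from the log above):

```python
import json

def error_exitcode(lines):
    """Scan minikube --output=json lines for an error event's exit code."""
    for line in lines:
        event = json.loads(line)
        if event.get("type", "").endswith(".error"):
            return int(event["data"]["exitcode"])
    return None

# Sample trimmed from the TestErrorJSONOutput output above.
sample = ('{"specversion":"1.0","id":"9b819a67-ebd7-436c-9f3d-1912fe3a1c1f",'
          '"source":"https://minikube.sigs.k8s.io/",'
          '"type":"io.k8s.sigs.minikube.error",'
          '"datacontenttype":"application/json",'
          '"data":{"exitcode":"56","message":"The driver \'fail\' is not '
          'supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}')
```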

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.1s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220221083612-6550 --network=
E0221 08:36:13.019323    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220221083612-6550 --network=: (26.720094217s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220221083612-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220221083612-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220221083612-6550: (2.340929885s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.10s)

TestKicCustomNetwork/use_default_bridge_network (29.08s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220221083641-6550 --network=bridge
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220221083641-6550 --network=bridge: (26.920226614s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220221083641-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220221083641-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220221083641-6550: (2.125343485s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.08s)

TestKicExistingNetwork (29.89s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220221083710-6550 --network=existing-network
E0221 08:37:30.568392    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:30.573642    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:30.583911    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:30.604156    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:30.644394    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:30.724701    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:30.885153    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:31.205675    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:31.846197    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:33.126616    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:37:35.687737    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220221083710-6550 --network=existing-network: (27.340692947s)
helpers_test.go:176: Cleaning up "existing-network-20220221083710-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220221083710-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220221083710-6550: (2.328754275s)
--- PASS: TestKicExistingNetwork (29.89s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (5.77s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220221083740-6550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0221 08:37:40.808660    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220221083740-6550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.767347382s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.77s)

TestMountStart/serial/VerifyMountFirst (0.33s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220221083740-6550 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)

TestMountStart/serial/StartWithMountSecond (5.82s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220221083740-6550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0221 08:37:51.049843    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220221083740-6550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.81840053s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.82s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.34s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220221083740-6550 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.75s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220221083740-6550 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220221083740-6550 --alsologtostderr -v=5: (1.750220679s)
--- PASS: TestMountStart/serial/DeleteFirst (1.75s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.33s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220221083740-6550 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220221083740-6550
mount_start_test.go:156: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220221083740-6550: (1.271263344s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.97s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220221083740-6550
mount_start_test.go:167: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220221083740-6550: (5.971122248s)
--- PASS: TestMountStart/serial/RestartStopped (6.97s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.33s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220221083740-6550 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.33s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (86.12s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0221 08:38:11.530782    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:38:29.174379    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
E0221 08:38:52.491968    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:38:56.860358    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m25.52919136s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.12s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.42s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:491: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- rollout status deployment/busybox
E0221 08:39:33.149332    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:33.154593    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:33.164842    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:33.185114    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:33.225440    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:33.305757    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:33.466032    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:33.786580    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:34.427510    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
multinode_test.go:491: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- rollout status deployment/busybox: (3.291947725s)
multinode_test.go:497: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- nslookup kubernetes.io
E0221 08:39:35.708189    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.42s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-6dg6b -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220221083805-6550 -- exec busybox-7978565885-pxmsk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                    
TestMultiNode/serial/AddNode (28.28s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220221083805-6550 -v 3 --alsologtostderr
E0221 08:39:38.268990    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:43.389434    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:39:53.629578    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220221083805-6550 -v 3 --alsologtostderr: (27.497783892s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.28s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.37s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

                                                
                                    
TestMultiNode/serial/CopyFile (12.00s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --output json --alsologtostderr
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp testdata/cp-test.txt multinode-20220221083805-6550:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550:/home/docker/cp-test.txt /tmp/mk_cp_test2552639775/cp-test_multinode-20220221083805-6550.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550:/home/docker/cp-test.txt multinode-20220221083805-6550-m02:/home/docker/cp-test_multinode-20220221083805-6550_multinode-20220221083805-6550-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550_multinode-20220221083805-6550-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550:/home/docker/cp-test.txt multinode-20220221083805-6550-m03:/home/docker/cp-test_multinode-20220221083805-6550_multinode-20220221083805-6550-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550_multinode-20220221083805-6550-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp testdata/cp-test.txt multinode-20220221083805-6550-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m02:/home/docker/cp-test.txt /tmp/mk_cp_test2552639775/cp-test_multinode-20220221083805-6550-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m02:/home/docker/cp-test.txt multinode-20220221083805-6550:/home/docker/cp-test_multinode-20220221083805-6550-m02_multinode-20220221083805-6550.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550-m02_multinode-20220221083805-6550.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m02:/home/docker/cp-test.txt multinode-20220221083805-6550-m03:/home/docker/cp-test_multinode-20220221083805-6550-m02_multinode-20220221083805-6550-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test.txt"
E0221 08:40:14.109781    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550-m02_multinode-20220221083805-6550-m03.txt"
E0221 08:40:14.412616    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp testdata/cp-test.txt multinode-20220221083805-6550-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m03:/home/docker/cp-test.txt /tmp/mk_cp_test2552639775/cp-test_multinode-20220221083805-6550-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m03:/home/docker/cp-test.txt multinode-20220221083805-6550:/home/docker/cp-test_multinode-20220221083805-6550-m03_multinode-20220221083805-6550.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550-m03_multinode-20220221083805-6550.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 cp multinode-20220221083805-6550-m03:/home/docker/cp-test.txt multinode-20220221083805-6550-m02:/home/docker/cp-test_multinode-20220221083805-6550-m03_multinode-20220221083805-6550-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 ssh -n multinode-20220221083805-6550-m02 "sudo cat /home/docker/cp-test_multinode-20220221083805-6550-m03_multinode-20220221083805-6550-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.00s)

                                                
                                    
TestMultiNode/serial/StopNode (2.53s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220221083805-6550 node stop m03: (1.276479943s)
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status: exit status 7 (621.77455ms)

                                                
                                                
-- stdout --
	multinode-20220221083805-6550
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220221083805-6550-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220221083805-6550-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr: exit status 7 (626.323283ms)

                                                
                                                
-- stdout --
	multinode-20220221083805-6550
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220221083805-6550-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220221083805-6550-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0221 08:40:20.278188   93035 out.go:297] Setting OutFile to fd 1 ...
	I0221 08:40:20.278711   93035 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:40:20.278724   93035 out.go:310] Setting ErrFile to fd 2...
	I0221 08:40:20.278731   93035 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:40:20.278986   93035 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
	I0221 08:40:20.279265   93035 out.go:304] Setting JSON to false
	I0221 08:40:20.279284   93035 mustload.go:65] Loading cluster: multinode-20220221083805-6550
	I0221 08:40:20.279975   93035 config.go:176] Loaded profile config "multinode-20220221083805-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:40:20.279996   93035 status.go:253] checking status of multinode-20220221083805-6550 ...
	I0221 08:40:20.280398   93035 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550 --format={{.State.Status}}
	I0221 08:40:20.313486   93035 status.go:328] multinode-20220221083805-6550 host status = "Running" (err=<nil>)
	I0221 08:40:20.313517   93035 host.go:66] Checking if "multinode-20220221083805-6550" exists ...
	I0221 08:40:20.313768   93035 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220221083805-6550
	I0221 08:40:20.346034   93035 host.go:66] Checking if "multinode-20220221083805-6550" exists ...
	I0221 08:40:20.346311   93035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0221 08:40:20.346350   93035 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220221083805-6550
	I0221 08:40:20.379881   93035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49212 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/multinode-20220221083805-6550/id_rsa Username:docker}
	I0221 08:40:20.468022   93035 ssh_runner.go:195] Run: systemctl --version
	I0221 08:40:20.471790   93035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0221 08:40:20.480879   93035 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0221 08:40:20.569876   93035 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:67 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-21 08:40:20.510577488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0221 08:40:20.570802   93035 kubeconfig.go:92] found "multinode-20220221083805-6550" server: "https://192.168.49.2:8443"
	I0221 08:40:20.570824   93035 api_server.go:165] Checking apiserver status ...
	I0221 08:40:20.570851   93035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0221 08:40:20.590679   93035 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1717/cgroup
	I0221 08:40:20.598113   93035 api_server.go:181] apiserver freezer: "9:freezer:/docker/a10990c235c4a56bc8a10787c5238205d1b01fe9300339ebfb3dfeebd8121c25/kubepods/burstable/pod8145f90dc270d9683ad72fcdce51fc35/a1b54a96554a324ea7654d9d90d70e9a6001f2fb6ba0160345df4a080bbdd228"
	I0221 08:40:20.598188   93035 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a10990c235c4a56bc8a10787c5238205d1b01fe9300339ebfb3dfeebd8121c25/kubepods/burstable/pod8145f90dc270d9683ad72fcdce51fc35/a1b54a96554a324ea7654d9d90d70e9a6001f2fb6ba0160345df4a080bbdd228/freezer.state
	I0221 08:40:20.604501   93035 api_server.go:203] freezer state: "THAWED"
	I0221 08:40:20.604528   93035 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0221 08:40:20.609206   93035 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0221 08:40:20.609234   93035 status.go:419] multinode-20220221083805-6550 apiserver status = Running (err=<nil>)
	I0221 08:40:20.609243   93035 status.go:255] multinode-20220221083805-6550 status: &{Name:multinode-20220221083805-6550 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0221 08:40:20.609259   93035 status.go:253] checking status of multinode-20220221083805-6550-m02 ...
	I0221 08:40:20.609533   93035 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550-m02 --format={{.State.Status}}
	I0221 08:40:20.642478   93035 status.go:328] multinode-20220221083805-6550-m02 host status = "Running" (err=<nil>)
	I0221 08:40:20.642508   93035 host.go:66] Checking if "multinode-20220221083805-6550-m02" exists ...
	I0221 08:40:20.642737   93035 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220221083805-6550-m02
	I0221 08:40:20.675407   93035 host.go:66] Checking if "multinode-20220221083805-6550-m02" exists ...
	I0221 08:40:20.675657   93035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0221 08:40:20.675696   93035 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220221083805-6550-m02
	I0221 08:40:20.709687   93035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/machines/multinode-20220221083805-6550-m02/id_rsa Username:docker}
	I0221 08:40:20.799483   93035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0221 08:40:20.808477   93035 status.go:255] multinode-20220221083805-6550-m02 status: &{Name:multinode-20220221083805-6550-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0221 08:40:20.808529   93035 status.go:253] checking status of multinode-20220221083805-6550-m03 ...
	I0221 08:40:20.808793   93035 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550-m03 --format={{.State.Status}}
	I0221 08:40:20.842951   93035 status.go:328] multinode-20220221083805-6550-m03 host status = "Stopped" (err=<nil>)
	I0221 08:40:20.842974   93035 status.go:341] host is not running, skipping remaining checks
	I0221 08:40:20.842979   93035 status.go:255] multinode-20220221083805-6550-m03 status: &{Name:multinode-20220221083805-6550-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.53s)

TestMultiNode/serial/StartAfterStop (24.65s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 node start m03 --alsologtostderr
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220221083805-6550 node start m03 --alsologtostderr: (23.788507041s)
multinode_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.65s)

TestMultiNode/serial/RestartKeepsNodes (103.61s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220221083805-6550
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220221083805-6550
E0221 08:40:55.071150    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220221083805-6550: (22.60284662s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true -v=8 --alsologtostderr
E0221 08:42:16.991866    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true -v=8 --alsologtostderr: (1m20.885119297s)
multinode_test.go:305: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220221083805-6550
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.61s)

TestMultiNode/serial/DeleteNode (5.37s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 node delete m03
E0221 08:42:30.568483    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
multinode_test.go:399: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220221083805-6550 node delete m03: (4.644221911s)
multinode_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.37s)

TestMultiNode/serial/StopMultiNode (21.69s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 stop
multinode_test.go:319: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220221083805-6550 stop: (21.438919865s)
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status: exit status 7 (124.92685ms)

-- stdout --
	multinode-20220221083805-6550
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220221083805-6550-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr: exit status 7 (122.29282ms)

-- stdout --
	multinode-20220221083805-6550
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220221083805-6550-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0221 08:42:56.100608  106605 out.go:297] Setting OutFile to fd 1 ...
	I0221 08:42:56.100688  106605 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:42:56.100693  106605 out.go:310] Setting ErrFile to fd 2...
	I0221 08:42:56.100699  106605 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0221 08:42:56.100817  106605 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/bin
	I0221 08:42:56.101013  106605 out.go:304] Setting JSON to false
	I0221 08:42:56.101032  106605 mustload.go:65] Loading cluster: multinode-20220221083805-6550
	I0221 08:42:56.101412  106605 config.go:176] Loaded profile config "multinode-20220221083805-6550": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.4
	I0221 08:42:56.101434  106605 status.go:253] checking status of multinode-20220221083805-6550 ...
	I0221 08:42:56.101848  106605 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550 --format={{.State.Status}}
	I0221 08:42:56.133159  106605 status.go:328] multinode-20220221083805-6550 host status = "Stopped" (err=<nil>)
	I0221 08:42:56.133183  106605 status.go:341] host is not running, skipping remaining checks
	I0221 08:42:56.133189  106605 status.go:255] multinode-20220221083805-6550 status: &{Name:multinode-20220221083805-6550 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0221 08:42:56.133210  106605 status.go:253] checking status of multinode-20220221083805-6550-m02 ...
	I0221 08:42:56.133471  106605 cli_runner.go:133] Run: docker container inspect multinode-20220221083805-6550-m02 --format={{.State.Status}}
	I0221 08:42:56.164963  106605 status.go:328] multinode-20220221083805-6550-m02 host status = "Stopped" (err=<nil>)
	I0221 08:42:56.164983  106605 status.go:341] host is not running, skipping remaining checks
	I0221 08:42:56.164989  106605 status.go:255] multinode-20220221083805-6550-m02 status: &{Name:multinode-20220221083805-6550-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.69s)

TestMultiNode/serial/RestartMultiNode (59.97s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0221 08:42:58.252870    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
E0221 08:43:29.174124    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220221083805-6550 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (59.246707808s)
multinode_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220221083805-6550 status --alsologtostderr
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
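The `kubectl get nodes -o go-template` check above walks every node's `status.conditions` and prints the status of each condition whose type is `Ready`. As a minimal sketch of what that template evaluates (not part of the test run), here is the same filter in Python over a hypothetical, trimmed NodeList payload — node names and condition values below are illustrative assumptions, not captured output:

```python
import json

# Hypothetical trimmed `kubectl get nodes -o json` payload; the field
# names mirror the Kubernetes NodeList schema the go-template walks.
nodes_json = json.dumps({
    "items": [
        {"metadata": {"name": "multinode-20220221083805-6550"},
         "status": {"conditions": [
             {"type": "MemoryPressure", "status": "False"},
             {"type": "Ready", "status": "True"},
         ]}},
        {"metadata": {"name": "multinode-20220221083805-6550-m02"},
         "status": {"conditions": [
             {"type": "Ready", "status": "True"},
         ]}},
    ]
})

def ready_statuses(payload):
    # For every item, collect the status of each condition whose type is
    # "Ready" -- the same filter the go-template expresses with its nested
    # {{range}} and {{if eq .type "Ready"}} blocks.
    doc = json.loads(payload)
    return [cond["status"]
            for item in doc.get("items", [])
            for cond in item.get("status", {}).get("conditions", [])
            if cond.get("type") == "Ready"]

print(ready_statuses(nodes_json))  # a healthy cluster yields only "True" entries
```

The test asserts that every line the template prints is `True`, i.e. all remaining nodes report Ready after the node operation.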
--- PASS: TestMultiNode/serial/RestartMultiNode (59.97s)

TestMultiNode/serial/ValidateNameConflict (29.84s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220221083805-6550
multinode_test.go:457: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220221083805-6550-m02 --driver=docker  --container-runtime=docker
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220221083805-6550-m02 --driver=docker  --container-runtime=docker: exit status 14 (74.777662ms)

-- stdout --
	* [multinode-20220221083805-6550-m02] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220221083805-6550-m02' is duplicated with machine name 'multinode-20220221083805-6550-m02' in profile 'multinode-20220221083805-6550'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220221083805-6550-m03 --driver=docker  --container-runtime=docker
multinode_test.go:465: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220221083805-6550-m03 --driver=docker  --container-runtime=docker: (27.006036804s)
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220221083805-6550
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220221083805-6550: exit status 80 (351.580979ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220221083805-6550

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220221083805-6550-m03 already exists in multinode-20220221083805-6550-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220221083805-6550-m03
multinode_test.go:477: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220221083805-6550-m03: (2.347126652s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.84s)

TestPreload (115.7s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220221084430-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
E0221 08:44:33.148477    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
E0221 08:45:00.832781    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220221084430-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m19.649106222s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220221084430-6550 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220221084430-6550 -- docker pull gcr.io/k8s-minikube/busybox: (1.643286739s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220221084430-6550 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220221084430-6550 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (31.576285286s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220221084430-6550 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20220221084430-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220221084430-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220221084430-6550: (2.458961769s)
--- PASS: TestPreload (115.70s)

TestScheduledStopUnix (100.2s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220221084626-6550 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220221084626-6550 --memory=2048 --driver=docker  --container-runtime=docker: (26.540480522s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220221084626-6550 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220221084626-6550 -n scheduled-stop-20220221084626-6550
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220221084626-6550 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220221084626-6550 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220221084626-6550 -n scheduled-stop-20220221084626-6550
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220221084626-6550
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220221084626-6550 --schedule 15s
E0221 08:47:30.569702    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220221084626-6550
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220221084626-6550: exit status 7 (90.984348ms)

-- stdout --
	scheduled-stop-20220221084626-6550
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220221084626-6550 -n scheduled-stop-20220221084626-6550
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220221084626-6550 -n scheduled-stop-20220221084626-6550: exit status 7 (94.406995ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220221084626-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220221084626-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220221084626-6550: (1.894418949s)
--- PASS: TestScheduledStopUnix (100.20s)

TestSkaffold (72.09s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /tmp/skaffold.exe2704910771 version
skaffold_test.go:61: skaffold version: v1.35.2
skaffold_test.go:64: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20220221084806-6550 --memory=2600 --driver=docker  --container-runtime=docker
E0221 08:48:29.174559    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
skaffold_test.go:64: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220221084806-6550 --memory=2600 --driver=docker  --container-runtime=docker: (26.383443302s)
skaffold_test.go:84: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:108: (dbg) Run:  /tmp/skaffold.exe2704910771 run --minikube-profile skaffold-20220221084806-6550 --kube-context skaffold-20220221084806-6550 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Done: /tmp/skaffold.exe2704910771 run --minikube-profile skaffold-20220221084806-6550 --kube-context skaffold-20220221084806-6550 --status-check=true --port-forward=false --interactive=false: (32.448919915s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-97df96546-g5gw9" [96e363e4-9129-462b-942b-cc4227f256e8] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011007515s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-755869c6cd-8rvzx" [43164fde-9927-470b-a814-d0cb93fd37bf] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006362064s
helpers_test.go:176: Cleaning up "skaffold-20220221084806-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20220221084806-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220221084806-6550: (2.539499635s)
--- PASS: TestSkaffold (72.09s)

TestInsufficientStorage (15.21s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220221084918-6550 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220221084918-6550 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (12.536910413s)

-- stdout --
	{"specversion":"1.0","id":"5859708e-ebba-4716-a2d4-c2c4920a1e7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220221084918-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"538e945c-032f-491f-a044-eccd4bcf8fe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13641"}}
	{"specversion":"1.0","id":"6c2b35dd-d32b-4b5a-a555-7a85fef61e1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f35b79ec-7571-44d9-af0b-fa2c632f24a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig"}}
	{"specversion":"1.0","id":"8e169f6c-8d57-4d43-acb1-47021a826c25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube"}}
	{"specversion":"1.0","id":"2c61e3d2-5ea3-44fd-bafc-c6729ccb1bba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6284dcba-c2fb-488b-9a4c-191b1d5440a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4984f709-783d-48c6-8eaf-5abac16d31ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c90423e-8d70-41e7-9a6d-742d0b2b22fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"abd4b94d-7a96-4ca4-bc9d-3f7c89b17ab7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"84b2c6d0-0cc3-4593-a200-3e4d240ec803","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220221084918-6550 in cluster insufficient-storage-20220221084918-6550","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"75897f30-7037-45cf-8202-b906844bd7eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ea5e25e6-7ad2-4e18-a882-4f278b2ad1e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3002905e-2ca4-4706-8588-e870af752d95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220221084918-6550 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220221084918-6550 --output=json --layout=cluster: exit status 7 (354.126224ms)

-- stdout --
	{"Name":"insufficient-storage-20220221084918-6550","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220221084918-6550","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0221 08:49:31.427744  138609 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220221084918-6550" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220221084918-6550 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220221084918-6550 --output=json --layout=cluster: exit status 7 (351.028385ms)

-- stdout --
	{"Name":"insufficient-storage-20220221084918-6550","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220221084918-6550","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0221 08:49:31.779073  138709 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220221084918-6550" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	E0221 08:49:31.791334  138709 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/insufficient-storage-20220221084918-6550/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220221084918-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220221084918-6550
E0221 08:49:33.148617    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220221084918-6550: (1.967541664s)
--- PASS: TestInsufficientStorage (15.21s)

TestRunningBinaryUpgrade (127.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.1124988089.exe start -p running-upgrade-20220221084933-6550 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.1124988089.exe start -p running-upgrade-20220221084933-6550 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m29.740882811s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220221084933-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220221084933-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.628526866s)
helpers_test.go:176: Cleaning up "running-upgrade-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220221084933-6550

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220221084933-6550: (2.830409722s)
--- PASS: TestRunningBinaryUpgrade (127.80s)

TestKubernetesUpgrade (107.64s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.980967028s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220221085141-6550
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220221085141-6550: (1.362954694s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220221085141-6550 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220221085141-6550 status --format={{.Host}}: exit status 7 (98.89977ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.23.5-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.23.5-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.655754657s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220221085141-6550 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (79.898977ms)

-- stdout --
	* [kubernetes-upgrade-20220221085141-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.5-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220221085141-6550
	    minikube start -p kubernetes-upgrade-20220221085141-6550 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220221085141-65502 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.5-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220221085141-6550 --kubernetes-version=v1.23.5-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.23.5-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220221085141-6550 --memory=2200 --kubernetes-version=v1.23.5-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (16.169707155s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220221085141-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220221085141-6550

=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220221085141-6550: (7.24090839s)
--- PASS: TestKubernetesUpgrade (107.64s)

TestMissingContainerUpgrade (154.43s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.1745708248.exe start -p missing-upgrade-20220221084933-6550 --memory=2200 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.1745708248.exe start -p missing-upgrade-20220221084933-6550 --memory=2200 --driver=docker  --container-runtime=docker: (1m29.754236748s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220221084933-6550

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220221084933-6550: (10.517784599s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220221084933-6550
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220221084933-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220221084933-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.545683898s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220221084933-6550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220221084933-6550: (6.979946562s)
--- PASS: TestMissingContainerUpgrade (154.43s)

TestPause/serial/Start (48.04s)

=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220221085158-6550 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220221085158-6550 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (48.042023343s)
--- PASS: TestPause/serial/Start (48.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (109.051431ms)

-- stdout --
	* [NoKubernetes-20220221085208-6550] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (25.95s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --driver=docker  --container-runtime=docker
E0221 08:52:30.569318    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory
no_kubernetes_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --driver=docker  --container-runtime=docker: (25.501896034s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220221085208-6550 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.95s)

TestNoKubernetes/serial/StartWithStopK8s (17.53s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --driver=docker  --container-runtime=docker: (14.356059262s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220221085208-6550 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220221085208-6550 status -o json: exit status 2 (480.144285ms)

-- stdout --
	{"Name":"NoKubernetes-20220221085208-6550","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220221085208-6550
no_kubernetes_test.go:125: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220221085208-6550: (2.698155884s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.53s)

TestPause/serial/SecondStartNoReconfiguration (38.98s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220221085158-6550 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220221085158-6550 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.96683484s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.98s)

TestNoKubernetes/serial/Start (7.01s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --no-kubernetes --driver=docker  --container-runtime=docker: (7.004906892s)
--- PASS: TestNoKubernetes/serial/Start (7.01s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220221085208-6550 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220221085208-6550 "sudo systemctl is-active --quiet service kubelet": exit status 1 (391.391933ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

TestNoKubernetes/serial/ProfileList (6.19s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:170: (dbg) Done: out/minikube-linux-amd64 profile list: (5.142586841s)
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:180: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.048868831s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.19s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220221085208-6550

=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220221085208-6550: (1.312518813s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (6.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:192: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220221085208-6550 --driver=docker  --container-runtime=docker: (6.322407974s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220221085208-6550 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220221085208-6550 "sudo systemctl is-active --quiet service kubelet": exit status 1 (398.615723ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

TestStoppedBinaryUpgrade/Setup (0.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

TestStoppedBinaryUpgrade/Upgrade (71.04s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.3521107852.exe start -p stopped-upgrade-20220221085315-6550 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.3521107852.exe start -p stopped-upgrade-20220221085315-6550 --memory=2200 --vm-driver=docker  --container-runtime=docker: (42.224250293s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.3521107852.exe -p stopped-upgrade-20220221085315-6550 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.3521107852.exe -p stopped-upgrade-20220221085315-6550 stop: (2.418417317s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220221085315-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0221 08:54:05.983915    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:05.989228    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:05.999499    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.019761    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.060107    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.140432    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.301581    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:06.621704    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:07.262034    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:08.542192    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:11.102802    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:16.223627    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
E0221 08:54:26.464223    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220221085315-6550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.398707461s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (71.04s)

TestPause/serial/Pause (1.21s)

=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220221085158-6550 --alsologtostderr -v=5
pause_test.go:111: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220221085158-6550 --alsologtostderr -v=5: (1.213350307s)
--- PASS: TestPause/serial/Pause (1.21s)
+
TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220221085158-6550 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220221085158-6550 --output=json --layout=cluster: exit status 2 (396.48917ms)

-- stdout --
	{"Name":"pause-20220221085158-6550","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220221085158-6550","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)

TestPause/serial/Unpause (0.96s)

=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220221085158-6550 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.96s)

TestPause/serial/PauseAgain (0.99s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220221085158-6550 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

TestPause/serial/DeletePaused (3.01s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220221085158-6550 --alsologtostderr -v=5
E0221 08:53:29.174477    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory

=== CONT  TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220221085158-6550 --alsologtostderr -v=5: (3.011218223s)
--- PASS: TestPause/serial/DeletePaused (3.01s)

TestNetworkPlugins/group/auto/Start (496.11s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (8m16.112122028s)
--- PASS: TestNetworkPlugins/group/auto/Start (496.11s)

TestPause/serial/VerifyDeletedResources (0.88s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:169: (dbg) Run:  docker ps -a
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20220221085158-6550
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20220221085158-6550: exit status 1 (39.876123ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220221085158-6550

** /stderr **
pause_test.go:179: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.88s)

TestNetworkPlugins/group/cilium/Start (97.4s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker
E0221 08:53:53.613729    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/functional-20220221083056-6550/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m37.39847718s)
--- PASS: TestNetworkPlugins/group/cilium/Start (97.40s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.12s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220221085315-6550
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220221085315-6550: (2.116740444s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.12s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-sxnkv" [134d1d0b-c8f4-489d-8794-db615edfa31d] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.017061078s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220221084934-6550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.39s)

TestNetworkPlugins/group/cilium/NetCatPod (12.91s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220221084934-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-zgfg6" [e52d4934-5efb-4eb7-86bc-b5662d466b3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-zgfg6" [e52d4934-5efb-4eb7-86bc-b5662d466b3c] Running
E0221 08:55:27.905555    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.007096203s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.91s)

TestNetworkPlugins/group/cilium/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220221084934-6550 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.18s)

TestNetworkPlugins/group/cilium/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220221084934-6550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

TestNetworkPlugins/group/cilium/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220221084934-6550 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.20s)

TestNetworkPlugins/group/false/Start (42.77s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker
E0221 08:55:56.193062    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p false-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (42.767454452s)
--- PASS: TestNetworkPlugins/group/false/Start (42.77s)

TestNetworkPlugins/group/false/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20220221084934-6550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

TestNetworkPlugins/group/false/NetCatPod (11.21s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20220221084934-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-gl7hj" [ba6605ea-dfed-40ce-83bd-cbd1b3c35da1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-gl7hj" [ba6605ea-dfed-40ce-83bd-cbd1b3c35da1] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006431582s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.21s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220221084933-6550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestNetworkPlugins/group/auto/NetCatPod (12.37s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220221084933-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-v8bk5" [5544bafb-ba1b-44ac-aa68-7b9c71bd7d70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-v8bk5" [5544bafb-ba1b-44ac-aa68-7b9c71bd7d70] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006416423s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.37s)

TestNetworkPlugins/group/kindnet/Start (48.67s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220221084934-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (48.670320568s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.67s)

TestNetworkPlugins/group/enable-default-cni/Start (294.69s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (4m54.694157846s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (294.69s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-b7vpv" [70703c09-41bc-4c02-9ccf-df45333fbc70] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014904478s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220221084934-6550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.19s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20220221084934-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-lcmt9" [0fd0efca-25d3-42b8-b210-f9f1dd5821bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:343: "netcat-668db85669-lcmt9" [0fd0efca-25d3-42b8-b210-f9f1dd5821bd] Running

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007918178s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.19s)

TestNetworkPlugins/group/bridge/Start (290.53s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (4m50.530470557s)
--- PASS: TestNetworkPlugins/group/bridge/Start (290.53s)

TestNetworkPlugins/group/kubenet/Start (290.28s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220221084933-6550 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (4m50.281757661s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (290.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220221084933-6550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220221084933-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-fm848" [813ad8bd-230d-4b32-81c6-ab2109b7e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0221 09:08:29.174519    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-fm848" [813ad8bd-230d-4b32-81c6-ab2109b7e0a7] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006467697s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220221084933-6550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220221084933-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-f2pzb" [01d71b96-fb12-4f85-808c-6495638c70c6] Pending
helpers_test.go:343: "netcat-668db85669-f2pzb" [01d71b96-fb12-4f85-808c-6495638c70c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-f2pzb" [01d71b96-fb12-4f85-808c-6495638c70c6] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006812868s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestStartStop/group/old-k8s-version/serial/FirstStart (129.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220221090948-6550 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220221090948-6550 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m9.314051164s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.32s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220221090948-6550 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [11b2bdf2-6442-4808-bb1b-2dc867613d07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:343: "busybox" [11b2bdf2-6442-4808-bb1b-2dc867613d07] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.011380433s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220221090948-6550 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220221090948-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0221 09:12:06.548985    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220221090948-6550 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/old-k8s-version/serial/Stop (10.97s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220221090948-6550 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220221090948-6550 --alsologtostderr -v=3: (10.969291333s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.97s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20220221084933-6550 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.3s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20220221084933-6550 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-4md9w" [8cc65eb5-ef82-4281-90a6-859ab9f89010] Pending
helpers_test.go:343: "netcat-668db85669-4md9w" [8cc65eb5-ef82-4281-90a6-859ab9f89010] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:343: "netcat-668db85669-4md9w" [8cc65eb5-ef82-4281-90a6-859ab9f89010] Running
E0221 09:12:27.029704    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/auto-20220221084933-6550/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.006043367s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550: exit status 7 (102.4671ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220221090948-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (410.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220221090948-6550 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220221090948-6550 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (6m49.907378351s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
E0221 09:19:08.282737    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (410.35s)

TestStartStop/group/no-preload/serial/FirstStart (54.58s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220221091339-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5-rc.0
E0221 09:13:40.149613    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:13:42.084608    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:42.089864    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:42.100177    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:42.120449    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:42.161253    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:42.241533    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:42.404962    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:42.725440    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:43.366371    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:44.646534    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
E0221 09:13:45.270147    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:13:47.207107    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220221091339-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5-rc.0: (54.576127685s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.58s)

TestStartStop/group/no-preload/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220221091339-6550 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [fb79d056-c563-4f84-955d-90a4971e5379] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [fb79d056-c563-4f84-955d-90a4971e5379] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.012222096s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220221091339-6550 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.45s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220221091339-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220221091339-6550 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/no-preload/serial/Stop (10.88s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220221091339-6550 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220221091339-6550 --alsologtostderr -v=3: (10.877192373s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.88s)

TestStartStop/group/embed-certs/serial/FirstStart (294.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220221091443-6550 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220221091443-6550 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4: (4m54.155259731s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (294.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550: exit status 7 (114.199649ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220221091339-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (579.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220221091339-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5-rc.0
E0221 09:14:56.952864    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220221091339-6550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5-rc.0: (9m38.630433421s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
E0221 09:24:33.149211    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (579.08s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (291.78s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220221091844-6550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4
E0221 09:18:47.801627    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
E0221 09:19:02.714240    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
E0221 09:19:05.983449    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220221091844-6550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4: (4m51.781673931s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (291.78s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-sn6bn" [d47e90cd-1050-43a8-8b22-7bc1f011c864] Running
E0221 09:19:09.771128    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014110081s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-sn6bn" [d47e90cd-1050-43a8-8b22-7bc1f011c864] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005849472s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220221090948-6550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220221090948-6550 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/old-k8s-version/serial/Pause (3.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220221090948-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550: exit status 2 (409.262476ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550: exit status 2 (415.061798ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220221090948-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220221090948-6550 -n old-k8s-version-20220221090948-6550
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.25s)

TestStartStop/group/newest-cni/serial/FirstStart (51.72s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220221091925-6550 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5-rc.0
E0221 09:19:33.148755    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/ingress-addon-legacy-20220221083319-6550/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220221091925-6550 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5-rc.0: (51.717775411s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.72s)

TestStartStop/group/embed-certs/serial/DeployApp (12.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220221091443-6550 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [6145738e-6130-4b3d-a3fb-d7a1707425ef] Pending
helpers_test.go:343: "busybox" [6145738e-6130-4b3d-a3fb-d7a1707425ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [6145738e-6130-4b3d-a3fb-d7a1707425ef] Running
E0221 09:19:49.243615    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.01279123s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220221091443-6550 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.64s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220221091443-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220221091443-6550 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/embed-certs/serial/Stop (12.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220221091443-6550 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220221091443-6550 --alsologtostderr -v=3: (12.182278691s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.18s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550: exit status 7 (96.086056ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220221091443-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (573.67s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220221091443-6550 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4
E0221 09:20:10.800543    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/cilium-20220221084934-6550/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220221091443-6550 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4: (9m33.260485478s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
E0221 09:29:36.705395    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (573.67s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220221091925-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/newest-cni/serial/Stop (10.96s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220221091925-6550 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220221091925-6550 --alsologtostderr -v=3: (10.955357852s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.96s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550: exit status 7 (99.293044ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220221091925-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (20.15s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220221091925-6550 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5-rc.0
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220221091925-6550 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5-rc.0: (19.737292261s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.15s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220221091925-6550 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/newest-cni/serial/Pause (3.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220221091925-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550: exit status 2 (399.790605ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550: exit status 2 (413.864586ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220221091925-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220221091925-6550 -n newest-cni-20220221091925-6550
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.12s)
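
The pause/status/unpause cycle this subtest drives can be sketched as the shell sequence below. The `minikube` function is a stub emulating the real binary's status behavior (profile name `newest-cni-demo` is hypothetical), so the snippet runs standalone; against a real cluster you would call `out/minikube-linux-amd64` directly.

```shell
#!/bin/sh
# Stub emulating minikube's pause/unpause/status behavior for illustration.
STATE=running
minikube() {
  case "$1" in
    pause)   STATE=paused ;;
    unpause) STATE=running ;;
    status)
      if [ "$STATE" = paused ]; then
        # While paused, status prints Paused/Stopped and exits 2; the
        # test logs this as "status error: exit status 2 (may be ok)".
        case "$*" in
          *APIServer*) echo Paused ;;
          *Kubelet*)   echo Stopped ;;
        esac
        return 2
      fi
      echo Running
      ;;
  esac
}

minikube pause -p newest-cni-demo
minikube status --format='{{.APIServer}}' -p newest-cni-demo   # Paused, exit 2
minikube status --format='{{.Kubelet}}' -p newest-cni-demo     # Stopped, exit 2
minikube unpause -p newest-cni-demo
minikube status --format='{{.APIServer}}' -p newest-cni-demo   # Running
```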

TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220221091844-6550 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [c30f7999-494b-4682-a5ec-42c5c6cf1a20] Pending
helpers_test.go:343: "busybox" [c30f7999-494b-4682-a5ec-42c5c6cf1a20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0221 09:23:38.140664    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kubenet-20220221084933-6550/client.crt: no such file or directory
helpers_test.go:343: "busybox" [c30f7999-494b-4682-a5ec-42c5c6cf1a20] Running
E0221 09:23:42.084580    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.010089357s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220221091844-6550 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.45s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.61s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220221091844-6550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220221091844-6550 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.61s)

TestStartStop/group/default-k8s-different-port/serial/Stop (10.73s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220221091844-6550 --alsologtostderr -v=3
E0221 09:23:55.004562    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/enable-default-cni-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220221091844-6550 --alsologtostderr -v=3: (10.72988837s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.73s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550: exit status 7 (96.393159ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220221091844-6550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (571.24s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220221091844-6550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4
E0221 09:24:05.984165    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/skaffold-20220221084806-6550/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220221091844-6550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4: (9m30.845211501s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (571.24s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-vjq5t" [83e2ef81-6567-4673-87b4-cae081554b67] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011498501s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-vjq5t" [83e2ef81-6567-4673-87b4-cae081554b67] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0221 09:24:41.880136    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/old-k8s-version-20220221090948-6550/client.crt: no such file or directory
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005640437s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220221091339-6550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220221091339-6550 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/no-preload/serial/Pause (3.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220221091339-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550: exit status 2 (393.86854ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550: exit status 2 (399.924261ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220221091339-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220221091339-6550 -n no-preload-20220221091339-6550
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-66fws" [37f40f16-716a-485a-aadc-e72bc75bcda5] Running
E0221 09:29:39.265847    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01169684s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-66fws" [37f40f16-716a-485a-aadc-e72bc75bcda5] Running
E0221 09:29:44.387040    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/no-preload-20220221091339-6550/client.crt: no such file or directory
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-66fws" [37f40f16-716a-485a-aadc-e72bc75bcda5] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005766364s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220221091443-6550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220221091443-6550 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/embed-certs/serial/Pause (3.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220221091443-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550: exit status 2 (389.995101ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550: exit status 2 (389.462879ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220221091443-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220221091443-6550 -n embed-certs-20220221091443-6550
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.08s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-bpm6m" [fdee26cb-12aa-49b9-a8f0-78614c3d4bf4] Running
E0221 09:33:29.174003    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/addons-20220221082609-6550/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011518525s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.2s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-bpm6m" [fdee26cb-12aa-49b9-a8f0-78614c3d4bf4] Running
E0221 09:33:35.030146    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/kindnet-20220221084934-6550/client.crt: no such file or directory
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-bpm6m" [fdee26cb-12aa-49b9-a8f0-78614c3d4bf4] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006100515s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220221091844-6550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.20s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220221091844-6550 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/default-k8s-different-port/serial/Pause (2.95s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220221091844-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550: exit status 2 (383.550482ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550: exit status 2 (384.278733ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220221091844-6550 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
E0221 09:33:42.084310    6550 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13641-3300-2a71df5eb5ec0ca8243173c97a5614cea8fb2e82/.minikube/profiles/bridge-20220221084933-6550/client.crt: no such file or directory
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220221091844-6550 -n default-k8s-different-port-20220221091844-6550
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.95s)


Test skip (20/279)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
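
These `cached-images`/`binaries` skips hinge on a preload check: individual images are only downloaded into the cache when no preloaded tarball is already on disk. A rough sketch of that decision, with an illustrative path and filename (not the suite's exact cache layout):

```shell
#!/bin/sh
# Hypothetical preload-tarball location; the real name also encodes a
# preload schema version and container runtime.
K8S_VERSION="v1.16.0"
PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-${K8S_VERSION}-docker-overlay2-amd64.tar.lz4"

if [ -f "$PRELOAD" ]; then
  echo "Preload exists, images won't be cached"
else
  echo "No preload image"
fi
```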

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.4/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.4/cached-images (0.00s)

TestDownloadOnly/v1.23.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.4/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.4/binaries (0.00s)

TestDownloadOnly/v1.23.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.4/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.4/kubectl (0.00s)

TestDownloadOnly/v1.23.5-rc.0/preload-exists (0.17s)

=== RUN   TestDownloadOnly/v1.23.5-rc.0/preload-exists
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.23.5-rc.0/preload-exists (0.17s)

TestDownloadOnly/v1.23.5-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.5-rc.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.5-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.24s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220221084933-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220221084933-6550
--- SKIP: TestNetworkPlugins/group/flannel (0.24s)

TestStartStop/group/disable-driver-mounts (0.49s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220221091843-6550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220221091843-6550
--- SKIP: TestStartStop/group/disable-driver-mounts (0.49s)