Test Report: Docker_Linux 13812

afb3956fdbde357e4baa0f8617bfd5a64bad6558:2022-04-12:23465

Tests failed (6/285)

Order  Failed test  Duration (s)
262 TestNetworkPlugins/group/calico/Start 515.97
273 TestNetworkPlugins/group/custom-weave/Start 524.87
285 TestNetworkPlugins/group/kindnet/DNS 352.43
288 TestNetworkPlugins/group/enable-default-cni/DNS 323.44
295 TestNetworkPlugins/group/bridge/DNS 296.92
300 TestNetworkPlugins/group/kubenet/DNS 367.91
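The failures above can be replayed one at a time with Go's test runner. A minimal sketch of building the `-run` filter from a slash-separated subtest name in the table (the helper name, the `./test/integration` path, and the timeout are assumptions for illustration, not part of this report):

```shell
#!/bin/sh
# Hypothetical helper: turn a subtest path from the table above into the
# anchored -run pattern Go's test runner expects. Go matches -run as one
# regex per '/'-separated subtest level, so each segment is anchored.
subtest_to_run_pattern() {
  printf '%s' "$1" |
    awk -F/ '{ for (i = 1; i <= NF; i++) printf "%s^%s$", (i > 1 ? "/" : ""), $i }'
}

pattern=$(subtest_to_run_pattern "TestNetworkPlugins/group/calico/Start")
echo "$pattern"   # ^TestNetworkPlugins$/^group$/^calico$/^Start$
# Then, from a minikube checkout (assumed layout, with out/minikube-linux-amd64 built):
#   go test ./test/integration -run "$pattern" -timeout 90m
```

Anchoring each segment avoids accidentally matching sibling subtests such as `custom-weave` when replaying `calico`.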
TestNetworkPlugins/group/calico/Start (515.97s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (8m35.941762066s)

-- stdout --
	* [calico-20220412193701-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node calico-20220412193701-177186 in cluster calico-20220412193701-177186
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0412 19:41:06.755373  407540 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:41:06.755483  407540 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:41:06.755494  407540 out.go:310] Setting ErrFile to fd 2...
	I0412 19:41:06.755499  407540 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:41:06.755599  407540 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:41:06.755855  407540 out.go:304] Setting JSON to false
	I0412 19:41:06.757678  407540 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8620,"bootTime":1649783847,"procs":1100,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:41:06.757738  407540 start.go:125] virtualization: kvm guest
	I0412 19:41:06.760216  407540 out.go:176] * [calico-20220412193701-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 19:41:06.761729  407540 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:41:06.760383  407540 notify.go:193] Checking for updates...
	I0412 19:41:06.763250  407540 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:41:06.764667  407540 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:41:06.766014  407540 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:41:06.767432  407540 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:41:06.767866  407540 config.go:178] Loaded profile config "auto-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:41:06.767956  407540 config.go:178] Loaded profile config "cilium-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:41:06.768022  407540 config.go:178] Loaded profile config "false-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:41:06.768070  407540 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:41:06.814305  407540 docker.go:137] docker version: linux-20.10.14
	I0412 19:41:06.814414  407540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:41:06.916092  407540 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:100 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:57 SystemTime:2022-04-12 19:41:06.844533019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:41:06.916184  407540 docker.go:254] overlay module found
	I0412 19:41:06.918333  407540 out.go:176] * Using the docker driver based on user configuration
	I0412 19:41:06.918361  407540 start.go:284] selected driver: docker
	I0412 19:41:06.918369  407540 start.go:801] validating driver "docker" against <nil>
	I0412 19:41:06.918393  407540 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:41:06.918448  407540 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:41:06.918474  407540 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 19:41:06.919963  407540 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:41:06.920740  407540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:41:07.032577  407540 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:100 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:57 SystemTime:2022-04-12 19:41:06.953113992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:41:07.032693  407540 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 19:41:07.032904  407540 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 19:41:07.035099  407540 out.go:176] * Using Docker driver with the root privilege
	I0412 19:41:07.035128  407540 cni.go:93] Creating CNI manager for "calico"
	I0412 19:41:07.035140  407540 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0412 19:41:07.035155  407540 start_flags.go:306] config:
	{Name:calico-20220412193701-177186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220412193701-177186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:41:07.037089  407540 out.go:176] * Starting control plane node calico-20220412193701-177186 in cluster calico-20220412193701-177186
	I0412 19:41:07.037143  407540 cache.go:120] Beginning downloading kic base image for docker with docker
	I0412 19:41:07.038803  407540 out.go:176] * Pulling base image ...
	I0412 19:41:07.038849  407540 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0412 19:41:07.038887  407540 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0412 19:41:07.038904  407540 cache.go:57] Caching tarball of preloaded images
	I0412 19:41:07.038939  407540 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 19:41:07.039148  407540 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 19:41:07.039169  407540 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0412 19:41:07.039306  407540 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/config.json ...
	I0412 19:41:07.039341  407540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/config.json: {Name:mk7f26d2e4187716e5056a0b59ea5ce2cf408246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:07.106602  407540 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 19:41:07.106636  407540 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 19:41:07.106659  407540 cache.go:206] Successfully downloaded all kic artifacts
	I0412 19:41:07.106707  407540 start.go:352] acquiring machines lock for calico-20220412193701-177186: {Name:mk17276463d9127a299981babd79b7c98044a6bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 19:41:07.106861  407540 start.go:356] acquired machines lock for "calico-20220412193701-177186" in 129.697µs
	I0412 19:41:07.106898  407540 start.go:91] Provisioning new machine with config: &{Name:calico-20220412193701-177186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220412193701-177186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0412 19:41:07.107034  407540 start.go:131] createHost starting for "" (driver="docker")
	I0412 19:41:07.109780  407540 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0412 19:41:07.110115  407540 start.go:165] libmachine.API.Create for "calico-20220412193701-177186" (driver="docker")
	I0412 19:41:07.110159  407540 client.go:168] LocalClient.Create starting
	I0412 19:41:07.110248  407540 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0412 19:41:07.110290  407540 main.go:134] libmachine: Decoding PEM data...
	I0412 19:41:07.110315  407540 main.go:134] libmachine: Parsing certificate...
	I0412 19:41:07.110387  407540 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0412 19:41:07.110413  407540 main.go:134] libmachine: Decoding PEM data...
	I0412 19:41:07.110432  407540 main.go:134] libmachine: Parsing certificate...
	I0412 19:41:07.110848  407540 cli_runner.go:164] Run: docker network inspect calico-20220412193701-177186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0412 19:41:07.163839  407540 cli_runner.go:211] docker network inspect calico-20220412193701-177186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0412 19:41:07.163898  407540 network_create.go:272] running [docker network inspect calico-20220412193701-177186] to gather additional debugging logs...
	I0412 19:41:07.163921  407540 cli_runner.go:164] Run: docker network inspect calico-20220412193701-177186
	W0412 19:41:07.209762  407540 cli_runner.go:211] docker network inspect calico-20220412193701-177186 returned with exit code 1
	I0412 19:41:07.209809  407540 network_create.go:275] error running [docker network inspect calico-20220412193701-177186]: docker network inspect calico-20220412193701-177186: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220412193701-177186
	I0412 19:41:07.209831  407540 network_create.go:277] output of [docker network inspect calico-20220412193701-177186]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220412193701-177186
	
	** /stderr **
	I0412 19:41:07.209886  407540 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:41:07.248915  407540 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-94f575456388 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:72:a5:e4:41}}
	I0412 19:41:07.249698  407540 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-f7ae229137ee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:38:3d:31:78}}
	I0412 19:41:07.250143  407540 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-b2f530c36ad4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:82:e9:cf:44}}
	I0412 19:41:07.250659  407540 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc00078e2f0] misses:0}
	I0412 19:41:07.250692  407540 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0412 19:41:07.250705  407540 network_create.go:115] attempt to create docker network calico-20220412193701-177186 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0412 19:41:07.250776  407540 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220412193701-177186
	I0412 19:41:07.325972  407540 network_create.go:99] docker network calico-20220412193701-177186 192.168.76.0/24 created
	I0412 19:41:07.326014  407540 kic.go:106] calculated static IP "192.168.76.2" for the "calico-20220412193701-177186" container
	I0412 19:41:07.326073  407540 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0412 19:41:07.368464  407540 cli_runner.go:164] Run: docker volume create calico-20220412193701-177186 --label name.minikube.sigs.k8s.io=calico-20220412193701-177186 --label created_by.minikube.sigs.k8s.io=true
	I0412 19:41:07.411685  407540 oci.go:103] Successfully created a docker volume calico-20220412193701-177186
	I0412 19:41:07.411806  407540 cli_runner.go:164] Run: docker run --rm --name calico-20220412193701-177186-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220412193701-177186 --entrypoint /usr/bin/test -v calico-20220412193701-177186:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0412 19:41:08.014174  407540 oci.go:107] Successfully prepared a docker volume calico-20220412193701-177186
	I0412 19:41:08.014212  407540 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0412 19:41:08.014230  407540 kic.go:179] Starting extracting preloaded images to volume ...
	I0412 19:41:08.014321  407540 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220412193701-177186:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0412 19:41:13.595788  407540 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220412193701-177186:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (5.581388851s)
	I0412 19:41:13.595831  407540 kic.go:188] duration metric: took 5.581595 seconds to extract preloaded images to volume
	W0412 19:41:13.595878  407540 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0412 19:41:13.595890  407540 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0412 19:41:13.595942  407540 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0412 19:41:13.723462  407540 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220412193701-177186 --name calico-20220412193701-177186 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220412193701-177186 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220412193701-177186 --network calico-20220412193701-177186 --ip 192.168.76.2 --volume calico-20220412193701-177186:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0412 19:41:14.250245  407540 cli_runner.go:164] Run: docker container inspect calico-20220412193701-177186 --format={{.State.Running}}
	I0412 19:41:14.288805  407540 cli_runner.go:164] Run: docker container inspect calico-20220412193701-177186 --format={{.State.Status}}
	I0412 19:41:14.322642  407540 cli_runner.go:164] Run: docker exec calico-20220412193701-177186 stat /var/lib/dpkg/alternatives/iptables
	I0412 19:41:14.425207  407540 oci.go:279] the created container "calico-20220412193701-177186" has a running status.
	I0412 19:41:14.425243  407540 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412193701-177186/id_rsa...
	I0412 19:41:14.617641  407540 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412193701-177186/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0412 19:41:14.780441  407540 cli_runner.go:164] Run: docker container inspect calico-20220412193701-177186 --format={{.State.Status}}
	I0412 19:41:14.843598  407540 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0412 19:41:14.843631  407540 kic_runner.go:114] Args: [docker exec --privileged calico-20220412193701-177186 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0412 19:41:14.935066  407540 cli_runner.go:164] Run: docker container inspect calico-20220412193701-177186 --format={{.State.Status}}
	I0412 19:41:14.977320  407540 machine.go:88] provisioning docker machine ...
	I0412 19:41:14.977361  407540 ubuntu.go:169] provisioning hostname "calico-20220412193701-177186"
	I0412 19:41:14.977425  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:15.018641  407540 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:15.018880  407540 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49379 <nil> <nil>}
	I0412 19:41:15.018908  407540 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220412193701-177186 && echo "calico-20220412193701-177186" | sudo tee /etc/hostname
	I0412 19:41:15.153752  407540 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220412193701-177186
	
	I0412 19:41:15.153825  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:15.193162  407540 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:15.193360  407540 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49379 <nil> <nil>}
	I0412 19:41:15.193392  407540 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220412193701-177186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220412193701-177186/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220412193701-177186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 19:41:15.316621  407540 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 19:41:15.316657  407540 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 19:41:15.316686  407540 ubuntu.go:177] setting up certificates
	I0412 19:41:15.316710  407540 provision.go:83] configureAuth start
	I0412 19:41:15.316773  407540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220412193701-177186
	I0412 19:41:15.351911  407540 provision.go:138] copyHostCerts
	I0412 19:41:15.351986  407540 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 19:41:15.352001  407540 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 19:41:15.352069  407540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 19:41:15.352159  407540 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 19:41:15.352172  407540 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 19:41:15.352207  407540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 19:41:15.352264  407540 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 19:41:15.352275  407540 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 19:41:15.352301  407540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1679 bytes)
	I0412 19:41:15.352362  407540 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.calico-20220412193701-177186 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220412193701-177186]
	I0412 19:41:15.431727  407540 provision.go:172] copyRemoteCerts
	I0412 19:41:15.431800  407540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 19:41:15.431847  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:15.471059  407540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:15.562087  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 19:41:15.584176  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0412 19:41:15.606238  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 19:41:15.636777  407540 provision.go:86] duration metric: configureAuth took 320.036237ms
	I0412 19:41:15.636807  407540 ubuntu.go:193] setting minikube options for container-runtime
	I0412 19:41:15.637029  407540 config.go:178] Loaded profile config "calico-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:41:15.637091  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:15.683357  407540 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:15.683528  407540 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49379 <nil> <nil>}
	I0412 19:41:15.683552  407540 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0412 19:41:15.814661  407540 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0412 19:41:15.814685  407540 ubuntu.go:71] root file system type: overlay
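The one-line probe above is how the provisioner learns the guest's root filesystem type (here `overlay`, since the "machine" is itself a Docker container) before templating the docker unit. The same probe, wrapped for reuse (GNU coreutils `df --output` assumed; the `root_fstype` name is illustrative):

```shell
# Sketch: report the filesystem type backing /, as the provisioner does.
root_fstype() {
  df --output=fstype / | tail -n 1
}
```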
	I0412 19:41:15.814908  407540 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0412 19:41:15.814970  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:15.863279  407540 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:15.863467  407540 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49379 <nil> <nil>}
	I0412 19:41:15.863571  407540 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0412 19:41:16.009448  407540 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0412 19:41:16.009542  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:16.048870  407540 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:16.049060  407540 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49379 <nil> <nil>}
	I0412 19:41:16.049094  407540 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0412 19:41:17.195354  407540 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-04-12 19:41:16.004644720 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0412 19:41:17.195388  407540 machine.go:91] provisioned docker machine in 2.218041734s
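The `sudo diff -u current new || { mv …; systemctl daemon-reload && … restart docker; }` command whose output appears above is a write-if-changed update: the unit file is only replaced, and the daemon only reloaded and restarted, when the freshly rendered unit actually differs from what is installed. A minimal local sketch of that pattern (`install_if_changed` is a hypothetical helper; the reload action is passed in as a command so the example can substitute `true` for `systemctl daemon-reload`):

```shell
# Sketch of the write-if-changed install used for docker.service.new:
# replace the target and run the reload hook only when the files differ.
install_if_changed() {
  new="$1"; target="$2"; shift 2
  if ! diff -u "$target" "$new" >/dev/null 2>&1; then
    mv "$new" "$target"   # files differ (or target missing): install
    "$@"                  # reload hook, e.g. systemctl daemon-reload
  else
    rm -f "$new"          # identical: discard the candidate, skip restart
  fi
}
```

Skipping the restart when nothing changed is what keeps repeated provisioning from bouncing the docker daemon on every run.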
	I0412 19:41:17.195400  407540 client.go:171] LocalClient.Create took 10.085231064s
	I0412 19:41:17.195413  407540 start.go:173] duration metric: libmachine.API.Create for "calico-20220412193701-177186" took 10.085300452s
	I0412 19:41:17.195429  407540 start.go:306] post-start starting for "calico-20220412193701-177186" (driver="docker")
	I0412 19:41:17.195436  407540 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 19:41:17.195497  407540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 19:41:17.195538  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:17.235185  407540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:17.327879  407540 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 19:41:17.330571  407540 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 19:41:17.330602  407540 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 19:41:17.330614  407540 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 19:41:17.330621  407540 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 19:41:17.330629  407540 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 19:41:17.330677  407540 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 19:41:17.330735  407540 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/1771862.pem -> 1771862.pem in /etc/ssl/certs
	I0412 19:41:17.330821  407540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 19:41:17.337608  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/1771862.pem --> /etc/ssl/certs/1771862.pem (1708 bytes)
	I0412 19:41:17.355535  407540 start.go:309] post-start completed in 160.091075ms
	I0412 19:41:17.355869  407540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220412193701-177186
	I0412 19:41:17.390341  407540 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/config.json ...
	I0412 19:41:17.390584  407540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 19:41:17.390633  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:17.430287  407540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:17.513717  407540 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 19:41:17.517796  407540 start.go:134] duration metric: createHost completed in 10.410744121s
	I0412 19:41:17.517822  407540 start.go:81] releasing machines lock for "calico-20220412193701-177186", held for 10.410943212s
	I0412 19:41:17.517900  407540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220412193701-177186
	I0412 19:41:17.550736  407540 ssh_runner.go:195] Run: systemctl --version
	I0412 19:41:17.550789  407540 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 19:41:17.550793  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:17.550845  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:17.587998  407540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:17.589755  407540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:17.692334  407540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0412 19:41:17.702251  407540 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0412 19:41:17.711175  407540 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0412 19:41:17.711220  407540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 19:41:17.720489  407540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 19:41:17.733352  407540 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0412 19:41:17.813321  407540 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0412 19:41:17.884395  407540 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0412 19:41:17.896238  407540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 19:41:17.984355  407540 ssh_runner.go:195] Run: sudo systemctl start docker
	I0412 19:41:17.995519  407540 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0412 19:41:18.034442  407540 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0412 19:41:18.081153  407540 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0412 19:41:18.081226  407540 cli_runner.go:164] Run: docker network inspect calico-20220412193701-177186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:41:18.123916  407540 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0412 19:41:18.129509  407540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 19:41:18.141311  407540 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0412 19:41:18.141394  407540 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0412 19:41:18.177276  407540 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0412 19:41:18.177308  407540 docker.go:537] Images already preloaded, skipping extraction
	I0412 19:41:18.177364  407540 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0412 19:41:18.210595  407540 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0412 19:41:18.210625  407540 cache_images.go:84] Images are preloaded, skipping loading
	I0412 19:41:18.210677  407540 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0412 19:41:18.295317  407540 cni.go:93] Creating CNI manager for "calico"
	I0412 19:41:18.295341  407540 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 19:41:18.295353  407540 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220412193701-177186 NodeName:calico-20220412193701-177186 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 19:41:18.295481  407540 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220412193701-177186"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 19:41:18.295552  407540 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220412193701-177186 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:calico-20220412193701-177186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0412 19:41:18.295604  407540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 19:41:18.302831  407540 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 19:41:18.302897  407540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 19:41:18.309490  407540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0412 19:41:18.322265  407540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 19:41:18.334588  407540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0412 19:41:18.346753  407540 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0412 19:41:18.349674  407540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 19:41:18.358643  407540 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186 for IP: 192.168.76.2
	I0412 19:41:18.358737  407540 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 19:41:18.358775  407540 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 19:41:18.358824  407540 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/client.key
	I0412 19:41:18.358838  407540 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/client.crt with IP's: []
	I0412 19:41:18.439494  407540 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/client.crt ...
	I0412 19:41:18.439526  407540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/client.crt: {Name:mk9fa937fe0533faa520749b723c05fb3bca947b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:18.439721  407540 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/client.key ...
	I0412 19:41:18.439738  407540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/client.key: {Name:mk6a4531695d38a9a9bb660df4e89d8f78ddf688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:18.439860  407540 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.key.31bdca25
	I0412 19:41:18.439878  407540 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0412 19:41:18.561421  407540 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.crt.31bdca25 ...
	I0412 19:41:18.561466  407540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.crt.31bdca25: {Name:mk2989676d3c98da711ea0d8e642a5b7ffd9f324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:18.561681  407540 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.key.31bdca25 ...
	I0412 19:41:18.561701  407540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.key.31bdca25: {Name:mk14abfd62f0cc05a9bb672cfa74c6bc854134ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:18.561841  407540 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.crt
	I0412 19:41:18.561918  407540 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.key
	I0412 19:41:18.561987  407540 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/proxy-client.key
	I0412 19:41:18.562002  407540 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/proxy-client.crt with IP's: []
	I0412 19:41:18.686676  407540 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/proxy-client.crt ...
	I0412 19:41:18.686709  407540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/proxy-client.crt: {Name:mk0ebb4dedfdde79bfb0c766c2f85d70dbe68367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:18.686889  407540 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/proxy-client.key ...
	I0412 19:41:18.686906  407540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/proxy-client.key: {Name:mkc36d178c870c0f689d2e40ea8e8da7798c52ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:18.687102  407540 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/177186.pem (1338 bytes)
	W0412 19:41:18.687166  407540 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/177186_empty.pem, impossibly tiny 0 bytes
	I0412 19:41:18.687190  407540 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 19:41:18.687227  407540 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 19:41:18.687264  407540 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 19:41:18.687298  407540 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1679 bytes)
	I0412 19:41:18.687359  407540 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/1771862.pem (1708 bytes)
	I0412 19:41:18.687941  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 19:41:18.706403  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 19:41:18.725084  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 19:41:18.741631  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412193701-177186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 19:41:18.761148  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 19:41:18.777873  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0412 19:41:18.794390  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 19:41:18.811704  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 19:41:18.829434  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/1771862.pem --> /usr/share/ca-certificates/1771862.pem (1708 bytes)
	I0412 19:41:18.845894  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 19:41:18.862270  407540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/177186.pem --> /usr/share/ca-certificates/177186.pem (1338 bytes)
	I0412 19:41:18.877975  407540 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 19:41:18.890064  407540 ssh_runner.go:195] Run: openssl version
	I0412 19:41:18.894695  407540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 19:41:18.902485  407540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:41:18.906277  407540 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:13 /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:41:18.906330  407540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:41:18.911319  407540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 19:41:18.918674  407540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177186.pem && ln -fs /usr/share/ca-certificates/177186.pem /etc/ssl/certs/177186.pem"
	I0412 19:41:18.926551  407540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177186.pem
	I0412 19:41:18.929486  407540 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:16 /usr/share/ca-certificates/177186.pem
	I0412 19:41:18.929521  407540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177186.pem
	I0412 19:41:18.934502  407540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/177186.pem /etc/ssl/certs/51391683.0"
	I0412 19:41:18.941619  407540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1771862.pem && ln -fs /usr/share/ca-certificates/1771862.pem /etc/ssl/certs/1771862.pem"
	I0412 19:41:18.948284  407540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1771862.pem
	I0412 19:41:18.951120  407540 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:16 /usr/share/ca-certificates/1771862.pem
	I0412 19:41:18.951159  407540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1771862.pem
	I0412 19:41:18.955773  407540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1771862.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 19:41:18.962590  407540 kubeadm.go:391] StartCluster: {Name:calico-20220412193701-177186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220412193701-177186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:41:18.962707  407540 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0412 19:41:18.993331  407540 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 19:41:19.000423  407540 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 19:41:19.007301  407540 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 19:41:19.007346  407540 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 19:41:19.014042  407540 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 19:41:19.014096  407540 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 19:41:29.321857  407540 out.go:203]   - Generating certificates and keys ...
	I0412 19:41:29.324967  407540 out.go:203]   - Booting up control plane ...
	I0412 19:41:29.328036  407540 out.go:203]   - Configuring RBAC rules ...
	I0412 19:41:29.329858  407540 cni.go:93] Creating CNI manager for "calico"
	I0412 19:41:29.331428  407540 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0412 19:41:29.331628  407540 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 19:41:29.331642  407540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0412 19:41:29.377128  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 19:41:31.043401  407540 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.666231962s)
	I0412 19:41:31.043448  407540 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 19:41:31.043543  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:31.043564  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=calico-20220412193701-177186 minikube.k8s.io/updated_at=2022_04_12T19_41_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:31.142720  407540 ops.go:34] apiserver oom_adj: -16
	I0412 19:41:31.142722  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:31.741528  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:32.241120  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:32.741079  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:33.241821  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:33.741105  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:34.241855  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:34.741119  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:35.241158  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:35.741843  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:36.241847  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:36.741895  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:37.241583  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:37.741124  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:38.241116  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:38.741427  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:39.241788  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:39.741210  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:40.241512  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:40.741500  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:41.241736  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:41.741172  407540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:41.892114  407540 kubeadm.go:1020] duration metric: took 10.848626676s to wait for elevateKubeSystemPrivileges.
	I0412 19:41:41.892147  407540 kubeadm.go:393] StartCluster complete in 22.929565551s
	I0412 19:41:41.892171  407540 settings.go:142] acquiring lock: {Name:mk2e99ebf61b9636eb8e70d244b6d08a5c7b2cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:41.892268  407540 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:41:41.893722  407540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk98070a3665a9ff78efa8315b027fd3f059f957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:42.421263  407540 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220412193701-177186" rescaled to 1
	I0412 19:41:42.421335  407540 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0412 19:41:42.423229  407540 out.go:176] * Verifying Kubernetes components...
	I0412 19:41:42.421483  407540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 19:41:42.423300  407540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 19:41:42.421690  407540 config.go:178] Loaded profile config "calico-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:41:42.421709  407540 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0412 19:41:42.423500  407540 addons.go:65] Setting storage-provisioner=true in profile "calico-20220412193701-177186"
	I0412 19:41:42.423518  407540 addons.go:65] Setting default-storageclass=true in profile "calico-20220412193701-177186"
	I0412 19:41:42.423523  407540 addons.go:153] Setting addon storage-provisioner=true in "calico-20220412193701-177186"
	W0412 19:41:42.423532  407540 addons.go:165] addon storage-provisioner should already be in state true
	I0412 19:41:42.423538  407540 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220412193701-177186"
	I0412 19:41:42.423577  407540 host.go:66] Checking if "calico-20220412193701-177186" exists ...
	I0412 19:41:42.423910  407540 cli_runner.go:164] Run: docker container inspect calico-20220412193701-177186 --format={{.State.Status}}
	I0412 19:41:42.424085  407540 cli_runner.go:164] Run: docker container inspect calico-20220412193701-177186 --format={{.State.Status}}
	I0412 19:41:42.489211  407540 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 19:41:42.489388  407540 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 19:41:42.489410  407540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 19:41:42.489475  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:42.500375  407540 addons.go:153] Setting addon default-storageclass=true in "calico-20220412193701-177186"
	W0412 19:41:42.500403  407540 addons.go:165] addon default-storageclass should already be in state true
	I0412 19:41:42.500432  407540 host.go:66] Checking if "calico-20220412193701-177186" exists ...
	I0412 19:41:42.500889  407540 cli_runner.go:164] Run: docker container inspect calico-20220412193701-177186 --format={{.State.Status}}
	I0412 19:41:42.549624  407540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 19:41:42.550714  407540 node_ready.go:35] waiting up to 5m0s for node "calico-20220412193701-177186" to be "Ready" ...
	I0412 19:41:42.555312  407540 node_ready.go:49] node "calico-20220412193701-177186" has status "Ready":"True"
	I0412 19:41:42.555336  407540 node_ready.go:38] duration metric: took 4.590482ms waiting for node "calico-20220412193701-177186" to be "Ready" ...
	I0412 19:41:42.555350  407540 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 19:41:42.556093  407540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:42.565194  407540 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 19:41:42.565225  407540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 19:41:42.565292  407540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412193701-177186
	I0412 19:41:42.586811  407540 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace to be "Ready" ...
	I0412 19:41:42.631777  407540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49379 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:42.801169  407540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 19:41:42.902096  407540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 19:41:44.399358  407540 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.849684277s)
	I0412 19:41:44.399402  407540 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0412 19:41:44.650251  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:41:45.657779  407540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.856565405s)
	I0412 19:41:45.657798  407540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.755653996s)
	I0412 19:41:45.808452  407540 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0412 19:41:45.808494  407540 addons.go:417] enableAddons completed in 3.38678642s
	I0412 19:41:47.098984  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:41:49.106507  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:41:51.600453  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:41:54.099038  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:41:56.100770  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:41:58.599368  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:00.607951  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:03.102155  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:05.599866  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:08.099169  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:10.100071  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:12.599281  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:15.099093  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:17.099660  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:19.598559  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:22.099690  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:24.598324  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:27.099339  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:29.099465  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:31.598542  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:34.099625  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:36.597967  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:38.598328  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:41.103808  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:43.256124  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:45.599132  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:48.099192  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:50.598253  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:52.600135  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:55.099282  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:57.099852  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:59.100877  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:01.598756  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:03.598935  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:06.098200  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:08.099558  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:10.598773  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:12.599427  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:15.098839  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:17.099028  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:19.099267  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:21.099465  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:23.099558  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:25.599758  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:28.099837  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:30.100013  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:32.600097  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:35.097905  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:37.599048  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:39.599330  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:42.100174  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:44.598677  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:47.099581  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:49.598288  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:51.598447  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:53.599121  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:56.098617  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:58.599792  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:01.100683  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:03.598665  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:06.098488  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:08.099003  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:10.099526  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:12.599772  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:14.611281  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:17.098390  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:19.598666  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:21.598932  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:24.098417  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:26.599978  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:29.099135  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:31.099281  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:33.099991  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:35.597886  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:38.098922  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:40.597994  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:42.599195  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:44.599476  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:47.099115  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:49.599218  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:52.098352  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:54.099352  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:56.101096  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:58.598522  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:01.098994  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:03.099666  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:05.598193  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:07.598568  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:09.599501  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:11.600170  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:14.098949  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:16.099145  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:18.599035  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:20.599401  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:23.098906  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:25.598653  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:28.097801  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:30.099261  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:32.100737  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:34.598517  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:36.599995  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:39.098502  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:41.598366  407540 pod_ready.go:102] pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:42.603775  407540 pod_ready.go:81] duration metric: took 4m0.016925733s waiting for pod "calico-kube-controllers-8594699699-5vp8b" in "kube-system" namespace to be "Ready" ...
	E0412 19:45:42.603801  407540 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0412 19:45:42.603810  407540 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-mmfrm" in "kube-system" namespace to be "Ready" ...
	I0412 19:45:44.616164  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:47.115751  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:49.117474  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:51.616925  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:54.117441  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:56.615796  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:58.617062  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:01.116898  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:03.616620  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:06.115971  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:08.116410  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:10.117334  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:12.616652  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:14.619253  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:17.115654  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:19.116523  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:21.615341  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:23.616850  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:26.116015  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:28.116195  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:30.116658  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:32.615600  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:34.616893  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:37.115195  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:39.117288  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:41.615505  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:44.116453  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:46.615838  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:49.116006  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:51.117043  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:53.615815  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:55.616445  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:58.115669  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:00.116438  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:02.615981  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:04.616579  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:07.118166  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:09.615102  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:11.616584  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:14.118058  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:16.616071  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:18.617081  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:21.115854  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:23.116713  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:25.616515  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:28.116709  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:30.116878  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:32.616610  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:35.116866  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:37.615853  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:40.116247  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:42.116298  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:44.116670  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:46.615632  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:49.117771  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:51.619204  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:54.115637  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:56.616494  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:58.617212  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:01.118744  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:03.616304  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:06.116737  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:08.615651  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:10.616318  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:12.617226  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:15.117100  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:17.615155  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:19.615580  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:21.616031  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:23.616265  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:25.616322  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:28.116202  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:30.116384  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:32.116442  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:34.615539  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:36.616581  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:39.116776  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:41.615374  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:43.615511  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:45.616036  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:48.116152  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:50.617235  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:53.115941  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:55.116201  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:57.117078  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:59.617343  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:02.117881  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:04.615517  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:06.616644  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:09.115581  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:11.615726  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:13.616007  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:16.116611  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:18.617280  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:21.117472  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:23.616573  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:26.116197  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:28.117059  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:30.117564  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:32.616880  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:35.116365  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:37.116795  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:39.616909  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:42.115564  407540 pod_ready.go:102] pod "calico-node-mmfrm" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:42.620767  407540 pod_ready.go:81] duration metric: took 4m0.016945098s waiting for pod "calico-node-mmfrm" in "kube-system" namespace to be "Ready" ...
	E0412 19:49:42.620795  407540 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0412 19:49:42.620811  407540 pod_ready.go:38] duration metric: took 8m0.065447294s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 19:49:42.623550  407540 out.go:176] 
	W0412 19:49:42.623711  407540 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0412 19:49:42.623722  407540 out.go:241] * 
	W0412 19:49:42.624628  407540 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 19:49:42.626405  407540 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (515.97s)
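The stderr above shows minikube's `pod_ready.go` loop re-checking each pod's Ready condition every couple of seconds until a 4-minute per-pod deadline expires ("took 4m0.016925733s waiting for pod ... to be \"Ready\""). A minimal sketch of that poll-until-deadline pattern, with a hypothetical `check` callable standing in for the real Kubernetes API query:

```python
import time

def wait_for_condition(check, timeout_s=240.0, interval_s=2.5):
    """Poll check() until it returns True or timeout_s elapses.

    Sketch of the wait pattern visible in the log: re-check roughly
    every 2.5 s, give up after a 4-minute deadline. `check` is a
    stand-in for the actual pod-status lookup, which this sketch
    does not perform.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        # Sleep for one interval, but never past the deadline.
        time.sleep(min(interval_s, max(0.0, deadline - time.monotonic())))
    return False
```

In the failing runs above, the condition never became true, so the loop exhausted its deadline and the test surfaced `WaitExtra: waitPodCondition: timed out waiting for the condition`.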

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (524.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p custom-weave-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: exit status 105 (8m44.841992098s)

                                                
                                                
-- stdout --
	* [custom-weave-20220412193701-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node custom-weave-20220412193701-177186 in cluster custom-weave-20220412193701-177186
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0412 19:41:31.539360  414625 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:41:31.539447  414625 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:41:31.539455  414625 out.go:310] Setting ErrFile to fd 2...
	I0412 19:41:31.539460  414625 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:41:31.539551  414625 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:41:31.539811  414625 out.go:304] Setting JSON to false
	I0412 19:41:31.541571  414625 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8645,"bootTime":1649783847,"procs":1053,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:41:31.541631  414625 start.go:125] virtualization: kvm guest
	I0412 19:41:31.543760  414625 out.go:176] * [custom-weave-20220412193701-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 19:41:31.545136  414625 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:41:31.543931  414625 notify.go:193] Checking for updates...
	I0412 19:41:31.546558  414625 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:41:31.547995  414625 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:41:31.549262  414625 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:41:31.550522  414625 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:41:31.550954  414625 config.go:178] Loaded profile config "calico-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:41:31.551053  414625 config.go:178] Loaded profile config "cilium-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:41:31.551143  414625 config.go:178] Loaded profile config "false-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:41:31.551192  414625 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:41:31.599068  414625 docker.go:137] docker version: linux-20.10.14
	I0412 19:41:31.599172  414625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:41:31.699803  414625 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:100 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:57 SystemTime:2022-04-12 19:41:31.63534627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:41:31.699955  414625 docker.go:254] overlay module found
	I0412 19:41:31.703079  414625 out.go:176] * Using the docker driver based on user configuration
	I0412 19:41:31.703106  414625 start.go:284] selected driver: docker
	I0412 19:41:31.703112  414625 start.go:801] validating driver "docker" against <nil>
	I0412 19:41:31.703145  414625 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:41:31.703201  414625 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:41:31.703229  414625 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 19:41:31.704797  414625 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:41:31.705590  414625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:41:31.812247  414625 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:100 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:57 SystemTime:2022-04-12 19:41:31.739162657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:41:31.812453  414625 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 19:41:31.812636  414625 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 19:41:31.814613  414625 out.go:176] * Using Docker driver with the root privilege
	I0412 19:41:31.814636  414625 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0412 19:41:31.814652  414625 start_flags.go:301] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0412 19:41:31.814667  414625 start_flags.go:306] config:
	{Name:custom-weave-20220412193701-177186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220412193701-177186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:41:31.816356  414625 out.go:176] * Starting control plane node custom-weave-20220412193701-177186 in cluster custom-weave-20220412193701-177186
	I0412 19:41:31.816382  414625 cache.go:120] Beginning downloading kic base image for docker with docker
	I0412 19:41:31.817918  414625 out.go:176] * Pulling base image ...
	I0412 19:41:31.817950  414625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0412 19:41:31.817975  414625 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0412 19:41:31.817979  414625 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 19:41:31.817996  414625 cache.go:57] Caching tarball of preloaded images
	I0412 19:41:31.818211  414625 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 19:41:31.818226  414625 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0412 19:41:31.818351  414625 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/config.json ...
	I0412 19:41:31.818377  414625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/config.json: {Name:mk9a19aa791e1764fdd51e1ec9caec4d13ec405a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:31.867931  414625 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 19:41:31.867957  414625 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 19:41:31.867971  414625 cache.go:206] Successfully downloaded all kic artifacts
	I0412 19:41:31.868011  414625 start.go:352] acquiring machines lock for custom-weave-20220412193701-177186: {Name:mk2d1bdcfe90db3686736200cbef92dfbbfdb6ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 19:41:31.868118  414625 start.go:356] acquired machines lock for "custom-weave-20220412193701-177186" in 87.583µs
	I0412 19:41:31.868142  414625 start.go:91] Provisioning new machine with config: &{Name:custom-weave-20220412193701-177186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220412193701-177186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0412 19:41:31.868208  414625 start.go:131] createHost starting for "" (driver="docker")
	I0412 19:41:31.871511  414625 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0412 19:41:31.871759  414625 start.go:165] libmachine.API.Create for "custom-weave-20220412193701-177186" (driver="docker")
	I0412 19:41:31.871791  414625 client.go:168] LocalClient.Create starting
	I0412 19:41:31.871840  414625 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0412 19:41:31.871866  414625 main.go:134] libmachine: Decoding PEM data...
	I0412 19:41:31.871878  414625 main.go:134] libmachine: Parsing certificate...
	I0412 19:41:31.871928  414625 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0412 19:41:31.871947  414625 main.go:134] libmachine: Decoding PEM data...
	I0412 19:41:31.871959  414625 main.go:134] libmachine: Parsing certificate...
	I0412 19:41:31.872275  414625 cli_runner.go:164] Run: docker network inspect custom-weave-20220412193701-177186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0412 19:41:31.904176  414625 cli_runner.go:211] docker network inspect custom-weave-20220412193701-177186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0412 19:41:31.904245  414625 network_create.go:272] running [docker network inspect custom-weave-20220412193701-177186] to gather additional debugging logs...
	I0412 19:41:31.904272  414625 cli_runner.go:164] Run: docker network inspect custom-weave-20220412193701-177186
	W0412 19:41:31.938543  414625 cli_runner.go:211] docker network inspect custom-weave-20220412193701-177186 returned with exit code 1
	I0412 19:41:31.938573  414625 network_create.go:275] error running [docker network inspect custom-weave-20220412193701-177186]: docker network inspect custom-weave-20220412193701-177186: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220412193701-177186
	I0412 19:41:31.938590  414625 network_create.go:277] output of [docker network inspect custom-weave-20220412193701-177186]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220412193701-177186
	
	** /stderr **
	I0412 19:41:31.938638  414625 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:41:31.974556  414625 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000400058] misses:0}
	I0412 19:41:31.974625  414625 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0412 19:41:31.974659  414625 network_create.go:115] attempt to create docker network custom-weave-20220412193701-177186 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0412 19:41:31.974729  414625 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220412193701-177186
	I0412 19:41:32.053148  414625 network_create.go:99] docker network custom-weave-20220412193701-177186 192.168.49.0/24 created
	I0412 19:41:32.053189  414625 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20220412193701-177186" container
	I0412 19:41:32.053249  414625 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0412 19:41:32.092400  414625 cli_runner.go:164] Run: docker volume create custom-weave-20220412193701-177186 --label name.minikube.sigs.k8s.io=custom-weave-20220412193701-177186 --label created_by.minikube.sigs.k8s.io=true
	I0412 19:41:32.133148  414625 oci.go:103] Successfully created a docker volume custom-weave-20220412193701-177186
	I0412 19:41:32.133251  414625 cli_runner.go:164] Run: docker run --rm --name custom-weave-20220412193701-177186-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220412193701-177186 --entrypoint /usr/bin/test -v custom-weave-20220412193701-177186:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0412 19:41:32.895827  414625 oci.go:107] Successfully prepared a docker volume custom-weave-20220412193701-177186
	I0412 19:41:32.895882  414625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0412 19:41:32.895905  414625 kic.go:179] Starting extracting preloaded images to volume ...
	I0412 19:41:32.895977  414625 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220412193701-177186:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0412 19:41:38.891494  414625 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220412193701-177186:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (5.995452522s)
	I0412 19:41:38.891530  414625 kic.go:188] duration metric: took 5.995620 seconds to extract preloaded images to volume
	W0412 19:41:38.891576  414625 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0412 19:41:38.891585  414625 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0412 19:41:38.891648  414625 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0412 19:41:39.038749  414625 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220412193701-177186 --name custom-weave-20220412193701-177186 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220412193701-177186 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220412193701-177186 --network custom-weave-20220412193701-177186 --ip 192.168.49.2 --volume custom-weave-20220412193701-177186:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0412 19:41:39.679741  414625 cli_runner.go:164] Run: docker container inspect custom-weave-20220412193701-177186 --format={{.State.Running}}
	I0412 19:41:39.726392  414625 cli_runner.go:164] Run: docker container inspect custom-weave-20220412193701-177186 --format={{.State.Status}}
	I0412 19:41:39.773582  414625 cli_runner.go:164] Run: docker exec custom-weave-20220412193701-177186 stat /var/lib/dpkg/alternatives/iptables
	I0412 19:41:39.889337  414625 oci.go:279] the created container "custom-weave-20220412193701-177186" has a running status.
	I0412 19:41:39.889376  414625 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220412193701-177186/id_rsa...
	I0412 19:41:40.121704  414625 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220412193701-177186/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0412 19:41:40.235800  414625 cli_runner.go:164] Run: docker container inspect custom-weave-20220412193701-177186 --format={{.State.Status}}
	I0412 19:41:40.284209  414625 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0412 19:41:40.284237  414625 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220412193701-177186 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0412 19:41:40.399805  414625 cli_runner.go:164] Run: docker container inspect custom-weave-20220412193701-177186 --format={{.State.Status}}
	I0412 19:41:40.441086  414625 machine.go:88] provisioning docker machine ...
	I0412 19:41:40.441137  414625 ubuntu.go:169] provisioning hostname "custom-weave-20220412193701-177186"
	I0412 19:41:40.441200  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:40.477862  414625 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:40.478072  414625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I0412 19:41:40.478101  414625 main.go:134] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220412193701-177186 && echo "custom-weave-20220412193701-177186" | sudo tee /etc/hostname
	I0412 19:41:40.636916  414625 main.go:134] libmachine: SSH cmd err, output: <nil>: custom-weave-20220412193701-177186
	
	I0412 19:41:40.637076  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:40.680725  414625 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:40.680896  414625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I0412 19:41:40.680928  414625 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220412193701-177186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220412193701-177186/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220412193701-177186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 19:41:40.810612  414625 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 19:41:40.810643  414625 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 19:41:40.810668  414625 ubuntu.go:177] setting up certificates
	I0412 19:41:40.810679  414625 provision.go:83] configureAuth start
	I0412 19:41:40.810731  414625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220412193701-177186
	I0412 19:41:40.858281  414625 provision.go:138] copyHostCerts
	I0412 19:41:40.858347  414625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 19:41:40.858360  414625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 19:41:40.858427  414625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 19:41:40.858530  414625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 19:41:40.858547  414625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 19:41:40.858589  414625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1679 bytes)
	I0412 19:41:40.858656  414625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 19:41:40.858666  414625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 19:41:40.858687  414625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 19:41:40.858733  414625 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220412193701-177186 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220412193701-177186]
	I0412 19:41:41.011944  414625 provision.go:172] copyRemoteCerts
	I0412 19:41:41.012006  414625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 19:41:41.012057  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:41.059337  414625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:41.155334  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0412 19:41:41.180448  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 19:41:41.214047  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0412 19:41:41.238349  414625 provision.go:86] duration metric: configureAuth took 427.650202ms
	I0412 19:41:41.238382  414625 ubuntu.go:193] setting minikube options for container-runtime
	I0412 19:41:41.238566  414625 config.go:178] Loaded profile config "custom-weave-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:41:41.238624  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:41.280253  414625 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:41.280449  414625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I0412 19:41:41.280475  414625 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0412 19:41:41.420902  414625 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0412 19:41:41.420925  414625 ubuntu.go:71] root file system type: overlay
	I0412 19:41:41.421194  414625 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0412 19:41:41.421276  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:41.466524  414625 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:41.466719  414625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I0412 19:41:41.466826  414625 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0412 19:41:41.661843  414625 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0412 19:41:41.661921  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:41.723813  414625 main.go:134] libmachine: Using SSH client type: native
	I0412 19:41:41.724003  414625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I0412 19:41:41.724033  414625 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0412 19:41:42.723267  414625 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-04-12 19:41:41.655357303 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0412 19:41:42.723305  414625 machine.go:91] provisioned docker machine in 2.282185424s
	I0412 19:41:42.723317  414625 client.go:171] LocalClient.Create took 10.851520654s
	I0412 19:41:42.723330  414625 start.go:173] duration metric: libmachine.API.Create for "custom-weave-20220412193701-177186" took 10.851571632s
	I0412 19:41:42.723347  414625 start.go:306] post-start starting for "custom-weave-20220412193701-177186" (driver="docker")
	I0412 19:41:42.723353  414625 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 19:41:42.723418  414625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 19:41:42.723478  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:42.760233  414625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:42.860666  414625 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 19:41:42.863629  414625 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 19:41:42.863660  414625 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 19:41:42.863675  414625 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 19:41:42.863684  414625 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 19:41:42.863699  414625 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 19:41:42.863752  414625 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 19:41:42.863813  414625 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/1771862.pem -> 1771862.pem in /etc/ssl/certs
	I0412 19:41:42.863881  414625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 19:41:42.870524  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/1771862.pem --> /etc/ssl/certs/1771862.pem (1708 bytes)
	I0412 19:41:42.894534  414625 start.go:309] post-start completed in 171.171297ms
	I0412 19:41:42.894964  414625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220412193701-177186
	I0412 19:41:42.936415  414625 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/config.json ...
	I0412 19:41:42.936647  414625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 19:41:42.936697  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:42.969617  414625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:43.058163  414625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 19:41:43.063534  414625 start.go:134] duration metric: createHost completed in 11.195301772s
	I0412 19:41:43.063566  414625 start.go:81] releasing machines lock for "custom-weave-20220412193701-177186", held for 11.195434181s
	I0412 19:41:43.063657  414625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220412193701-177186
	I0412 19:41:43.110958  414625 ssh_runner.go:195] Run: systemctl --version
	I0412 19:41:43.111007  414625 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 19:41:43.111022  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:43.111073  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:41:43.152569  414625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:43.153364  414625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220412193701-177186/id_rsa Username:docker}
	I0412 19:41:43.262064  414625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0412 19:41:43.273088  414625 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0412 19:41:43.282989  414625 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0412 19:41:43.283049  414625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 19:41:43.293450  414625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 19:41:43.347856  414625 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0412 19:41:43.428452  414625 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0412 19:41:43.512332  414625 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0412 19:41:43.523529  414625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 19:41:43.615277  414625 ssh_runner.go:195] Run: sudo systemctl start docker
	I0412 19:41:43.629146  414625 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0412 19:41:43.674166  414625 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0412 19:41:44.013245  414625 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0412 19:41:44.013355  414625 cli_runner.go:164] Run: docker network inspect custom-weave-20220412193701-177186 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:41:44.063525  414625 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 19:41:44.067817  414625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 19:41:44.093943  414625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0412 19:41:44.094014  414625 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0412 19:41:44.137142  414625 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0412 19:41:44.137174  414625 docker.go:537] Images already preloaded, skipping extraction
	I0412 19:41:44.137229  414625 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0412 19:41:44.170815  414625 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0412 19:41:44.170843  414625 cache_images.go:84] Images are preloaded, skipping loading
	I0412 19:41:44.170892  414625 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0412 19:41:44.281793  414625 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0412 19:41:44.281841  414625 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 19:41:44.281862  414625 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220412193701-177186 NodeName:custom-weave-20220412193701-177186 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 19:41:44.282011  414625 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20220412193701-177186"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 19:41:44.282089  414625 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220412193701-177186 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220412193701-177186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0412 19:41:44.282135  414625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 19:41:44.317153  414625 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 19:41:44.317230  414625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 19:41:44.324220  414625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0412 19:41:44.340942  414625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 19:41:44.364450  414625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2056 bytes)
	I0412 19:41:44.384549  414625 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 19:41:44.388725  414625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 19:41:44.471010  414625 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186 for IP: 192.168.49.2
	I0412 19:41:44.471176  414625 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 19:41:44.471239  414625 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 19:41:44.471315  414625 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/client.key
	I0412 19:41:44.471339  414625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/client.crt with IP's: []
	I0412 19:41:44.796932  414625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/client.crt ...
	I0412 19:41:44.796968  414625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/client.crt: {Name:mk3ae2df2bd0b563a52bf0f19ebcbfb46a5c3bfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:44.797211  414625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/client.key ...
	I0412 19:41:44.797231  414625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/client.key: {Name:mk69feced50bc8872bb7eb3651747d5387c8c8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:44.797356  414625 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.key.dd3b5fb2
	I0412 19:41:44.797377  414625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0412 19:41:44.924966  414625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.crt.dd3b5fb2 ...
	I0412 19:41:44.925015  414625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.crt.dd3b5fb2: {Name:mk8d5048a0d4a62aa306b4dd0b2f979b8b6b1915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:44.966013  414625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.key.dd3b5fb2 ...
	I0412 19:41:44.966051  414625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.key.dd3b5fb2: {Name:mkcc0f5e5163fe28b615a2daeb711e93936477af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:44.966206  414625 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.crt
	I0412 19:41:44.966300  414625 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.key
	I0412 19:41:44.966383  414625 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/proxy-client.key
	I0412 19:41:44.966404  414625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/proxy-client.crt with IP's: []
	I0412 19:41:45.200619  414625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/proxy-client.crt ...
	I0412 19:41:45.200663  414625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/proxy-client.crt: {Name:mkdbd3a0f7a159d8e39050247216d30fa1208e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:41:45.283165  414625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/proxy-client.key ...
	I0412 19:41:45.283199  414625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/proxy-client.key: {Name:mk4e84e261317178dd73bb7c5016f94bf555b2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
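Each `lock.go:35] WriteFile acquiring` line above shows minikube taking a file lock with a 500ms retry delay and a 1m timeout before writing a cert or key. The actual implementation lives in minikube's `lock` package; the sketch below is only a hypothetical illustration of that retry-until-timeout pattern using an exclusive lock file (the function names `acquire_lock`/`release_lock` are invented for the example).

```python
import os
import time

def acquire_lock(path: str, delay: float = 0.5, timeout: float = 60.0) -> bool:
    """Try to create the lock file exclusively, retrying every `delay`
    seconds until `timeout` elapses. Returns True once the lock is held,
    False if the timeout expires first (mirroring Delay:500ms Timeout:1m0s)."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True            # lock acquired
        except FileExistsError:
            if time.monotonic() >= deadline:
                return False       # gave up after the timeout
            time.sleep(delay)

def release_lock(path: str) -> None:
    os.remove(path)
```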
	I0412 19:41:45.283461  414625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/177186.pem (1338 bytes)
	W0412 19:41:45.283518  414625 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/177186_empty.pem, impossibly tiny 0 bytes
	I0412 19:41:45.283535  414625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 19:41:45.283570  414625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 19:41:45.283614  414625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 19:41:45.283642  414625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1679 bytes)
	I0412 19:41:45.283698  414625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/1771862.pem (1708 bytes)
	I0412 19:41:45.284434  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 19:41:45.304683  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 19:41:45.322208  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 19:41:45.340584  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412193701-177186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 19:41:45.491716  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 19:41:45.645641  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0412 19:41:45.665439  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 19:41:45.682679  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 19:41:45.700510  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/1771862.pem --> /usr/share/ca-certificates/1771862.pem (1708 bytes)
	I0412 19:41:45.718136  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 19:41:45.734998  414625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/177186.pem --> /usr/share/ca-certificates/177186.pem (1338 bytes)
	I0412 19:41:45.752374  414625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 19:41:45.764805  414625 ssh_runner.go:195] Run: openssl version
	I0412 19:41:45.770117  414625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177186.pem && ln -fs /usr/share/ca-certificates/177186.pem /etc/ssl/certs/177186.pem"
	I0412 19:41:45.803369  414625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177186.pem
	I0412 19:41:45.807474  414625 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:16 /usr/share/ca-certificates/177186.pem
	I0412 19:41:45.807533  414625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177186.pem
	I0412 19:41:45.902292  414625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/177186.pem /etc/ssl/certs/51391683.0"
	I0412 19:41:45.912009  414625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1771862.pem && ln -fs /usr/share/ca-certificates/1771862.pem /etc/ssl/certs/1771862.pem"
	I0412 19:41:45.921145  414625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1771862.pem
	I0412 19:41:45.924623  414625 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:16 /usr/share/ca-certificates/1771862.pem
	I0412 19:41:45.924686  414625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1771862.pem
	I0412 19:41:45.931095  414625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1771862.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 19:41:45.939980  414625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 19:41:45.948210  414625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:41:45.951505  414625 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:13 /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:41:45.951554  414625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:41:45.957201  414625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
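The three `openssl x509 -hash` / `ln -fs` pairs above install each CA into OpenSSL's hashed certificate directory: the cert's subject hash (e.g. `b5213941` for `minikubeCA.pem`) becomes the symlink name `<hash>.0` under `/etc/ssl/certs`. A minimal sketch of that linking step, assuming the hash has already been obtained from `openssl x509 -hash -noout -in <pem>` (the helper name `link_ca_cert` is invented for illustration):

```python
import os

def link_ca_cert(pem_path: str, subject_hash: str, certs_dir: str) -> str:
    """Create the <hash>.0 symlink that OpenSSL's hashed-directory lookup
    expects, mirroring `ln -fs .../minikubeCA.pem /etc/ssl/certs/b5213941.0`.
    subject_hash is the output of `openssl x509 -hash -noout -in <pem>`."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)                 # `ln -fs` semantics: force-replace
    os.symlink(pem_path, link)
    return link
```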
	I0412 19:41:45.965517  414625 kubeadm.go:391] StartCluster: {Name:custom-weave-20220412193701-177186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220412193701-177186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:41:45.965663  414625 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0412 19:41:46.003556  414625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 19:41:46.011532  414625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 19:41:46.107608  414625 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 19:41:46.107683  414625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 19:41:46.115551  414625 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 19:41:46.115596  414625 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 19:41:46.844094  414625 out.go:203]   - Generating certificates and keys ...
	I0412 19:41:49.791421  414625 out.go:203]   - Booting up control plane ...
	I0412 19:41:57.339353  414625 out.go:203]   - Configuring RBAC rules ...
	I0412 19:41:57.759154  414625 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0412 19:41:57.761891  414625 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0412 19:41:57.761968  414625 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 19:41:57.762059  414625 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0412 19:41:57.766305  414625 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0412 19:41:57.766338  414625 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
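The `existence check … stat -c "%s %y"` / `scp testdata/weavenet.yaml` pair above is minikube's copy-if-stale pattern: probe the remote file's size and mtime first, and only transfer when the file is absent or differs. A hedged local sketch of that logic, assuming size plus mtime is the staleness criterion (the helper name `copy_if_stale` is invented for the example):

```python
import os
import shutil

def copy_if_stale(src: str, dst: str) -> bool:
    """Probe dst the way the log probes /var/tmp/minikube/cni.yaml with
    `stat -c "%s %y"`, then copy only when dst is missing or its size/mtime
    differ from src. Returns True when a copy actually happened."""
    s = os.stat(src)
    try:
        d = os.stat(dst)
        if d.st_size == s.st_size and int(d.st_mtime) == int(s.st_mtime):
            return False           # up to date, skip the transfer
    except FileNotFoundError:
        pass                       # stat exited non-zero: file absent
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.copy2(src, dst)         # copy2 preserves size and mtime
    return True
```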
	I0412 19:41:57.797251  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 19:41:59.064265  414625 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.266960638s)
	I0412 19:41:59.064342  414625 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 19:41:59.064403  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:59.064406  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=custom-weave-20220412193701-177186 minikube.k8s.io/updated_at=2022_04_12T19_41_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:59.071491  414625 ops.go:34] apiserver oom_adj: -16
	I0412 19:41:59.167205  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:41:59.753441  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:00.253131  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:00.753801  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:01.252928  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:01.753443  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:02.253152  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:02.753117  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:03.253141  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:03.753792  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:04.253527  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:04.753367  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:05.253442  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:05.753366  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:06.253394  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:06.753890  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:07.253516  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:07.753104  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:08.253561  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:08.753530  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:09.253772  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:09.753531  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:10.252943  414625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:42:10.321619  414625 kubeadm.go:1020] duration metric: took 11.257263181s to wait for elevateKubeSystemPrivileges.
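The run of `kubectl get sa default` lines above is one probe roughly every 500ms until the `default` service account exists, which the log then summarizes as the 11.26s `elevateKubeSystemPrivileges` wait. The generic poll-until-success shape of that loop can be sketched as follows (this is an illustration of the pattern, not minikube's Go code; the function name `poll` is invented):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def poll(fn: Callable[[], T], interval: float = 0.5, timeout: float = 300.0) -> T:
    """Call fn every `interval` seconds until it stops raising, re-raising
    the last error once `timeout` seconds have elapsed -- the same shape as
    the repeated `kubectl get sa default` probes in the log."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            return fn()
        except Exception:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)
```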
	I0412 19:42:10.321662  414625 kubeadm.go:393] StartCluster complete in 24.356156357s
	I0412 19:42:10.321686  414625 settings.go:142] acquiring lock: {Name:mk2e99ebf61b9636eb8e70d244b6d08a5c7b2cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:42:10.321787  414625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:42:10.323335  414625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk98070a3665a9ff78efa8315b027fd3f059f957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:42:10.841248  414625 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220412193701-177186" rescaled to 1
	I0412 19:42:10.841323  414625 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0412 19:42:10.843168  414625 out.go:176] * Verifying Kubernetes components...
	I0412 19:42:10.841390  414625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 19:42:10.843259  414625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 19:42:10.841424  414625 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0412 19:42:10.841613  414625 config.go:178] Loaded profile config "custom-weave-20220412193701-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:42:10.843326  414625 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220412193701-177186"
	I0412 19:42:10.843355  414625 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220412193701-177186"
	I0412 19:42:10.843326  414625 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220412193701-177186"
	I0412 19:42:10.843439  414625 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220412193701-177186"
	W0412 19:42:10.843451  414625 addons.go:165] addon storage-provisioner should already be in state true
	I0412 19:42:10.843507  414625 host.go:66] Checking if "custom-weave-20220412193701-177186" exists ...
	I0412 19:42:10.843772  414625 cli_runner.go:164] Run: docker container inspect custom-weave-20220412193701-177186 --format={{.State.Status}}
	I0412 19:42:10.844003  414625 cli_runner.go:164] Run: docker container inspect custom-weave-20220412193701-177186 --format={{.State.Status}}
	I0412 19:42:10.908190  414625 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 19:42:10.908336  414625 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 19:42:10.908355  414625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 19:42:10.908412  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:42:10.909457  414625 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220412193701-177186"
	W0412 19:42:10.909480  414625 addons.go:165] addon default-storageclass should already be in state true
	I0412 19:42:10.909511  414625 host.go:66] Checking if "custom-weave-20220412193701-177186" exists ...
	I0412 19:42:10.910021  414625 cli_runner.go:164] Run: docker container inspect custom-weave-20220412193701-177186 --format={{.State.Status}}
	I0412 19:42:10.956439  414625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220412193701-177186/id_rsa Username:docker}
	I0412 19:42:10.957216  414625 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 19:42:10.957243  414625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 19:42:10.957279  414625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220412193701-177186
	I0412 19:42:11.001072  414625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220412193701-177186/id_rsa Username:docker}
	I0412 19:42:11.001623  414625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 19:42:11.003680  414625 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220412193701-177186" to be "Ready" ...
	I0412 19:42:11.007804  414625 node_ready.go:49] node "custom-weave-20220412193701-177186" has status "Ready":"True"
	I0412 19:42:11.007832  414625 node_ready.go:38] duration metric: took 4.11697ms waiting for node "custom-weave-20220412193701-177186" to be "Ready" ...
	I0412 19:42:11.007844  414625 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 19:42:11.016883  414625 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-9bvz8" in "kube-system" namespace to be "Ready" ...
	I0412 19:42:11.093672  414625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 19:42:11.202436  414625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 19:42:11.483532  414625 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0412 19:42:11.690074  414625 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0412 19:42:11.690115  414625 addons.go:417] enableAddons completed in 848.692199ms
	I0412 19:42:13.031711  414625 pod_ready.go:102] pod "coredns-64897985d-9bvz8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:15.029174  414625 pod_ready.go:97] error getting pod "coredns-64897985d-9bvz8" in "kube-system" namespace (skipping!): pods "coredns-64897985d-9bvz8" not found
	I0412 19:42:15.029227  414625 pod_ready.go:81] duration metric: took 4.012307496s waiting for pod "coredns-64897985d-9bvz8" in "kube-system" namespace to be "Ready" ...
	E0412 19:42:15.029241  414625 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-9bvz8" in "kube-system" namespace (skipping!): pods "coredns-64897985d-9bvz8" not found
	I0412 19:42:15.029250  414625 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-p76mp" in "kube-system" namespace to be "Ready" ...
	I0412 19:42:17.088952  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:19.089025  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:21.090597  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:23.589124  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:26.089174  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:28.090863  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:30.589112  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:33.089076  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:35.588663  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:37.589541  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:40.088679  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:42.589095  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:45.088553  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:47.089734  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:49.588863  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:52.088073  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:54.089111  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:56.089684  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:42:58.589426  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:00.589788  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:02.593353  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:05.089550  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:07.089595  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:09.589560  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:11.589875  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:14.088934  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:16.089752  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:18.588414  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:20.589745  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:23.089204  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:25.589183  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:28.088833  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:30.089364  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:32.589353  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:35.088523  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:37.589986  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:40.088374  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:42.089173  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:44.588862  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:47.089239  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:49.089737  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:51.090004  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:53.589426  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:55.589551  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:43:58.088650  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:00.589421  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:03.088991  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:05.089478  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:07.588236  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:09.589691  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:11.590192  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:14.088475  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:16.089810  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:18.589209  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:20.589333  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:22.589538  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:25.088927  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:27.089826  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:29.588875  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:31.589544  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:34.089151  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:36.089278  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:38.588649  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:41.089350  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:43.089684  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:45.589312  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:48.088527  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:50.590553  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:53.088516  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:55.089311  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:57.089483  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:44:59.095388  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:01.589098  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:03.589426  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:05.589498  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:07.589598  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:10.088222  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:12.088396  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:14.089222  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:16.089327  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:18.588738  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:20.593903  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:23.089724  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:25.588648  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:27.589100  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:29.589352  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:32.088411  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:34.090164  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:36.090924  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:38.588779  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:40.589173  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:43.088315  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:45.588426  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:47.588919  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:49.591580  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:52.088571  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:54.589114  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:57.089197  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:45:59.588833  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:01.589254  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:03.589443  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:06.088676  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:08.589340  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:11.088616  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:13.089152  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:15.091891  414625 pod_ready.go:102] pod "coredns-64897985d-p76mp" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:15.091916  414625 pod_ready.go:81] duration metric: took 4m0.062651138s waiting for pod "coredns-64897985d-p76mp" in "kube-system" namespace to be "Ready" ...
	E0412 19:46:15.091926  414625 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0412 19:46:15.091935  414625 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220412193701-177186" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.095795  414625 pod_ready.go:92] pod "etcd-custom-weave-20220412193701-177186" in "kube-system" namespace has status "Ready":"True"
	I0412 19:46:15.095822  414625 pod_ready.go:81] duration metric: took 3.878938ms waiting for pod "etcd-custom-weave-20220412193701-177186" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.095832  414625 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220412193701-177186" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.101194  414625 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220412193701-177186" in "kube-system" namespace has status "Ready":"True"
	I0412 19:46:15.101216  414625 pod_ready.go:81] duration metric: took 5.376136ms waiting for pod "kube-apiserver-custom-weave-20220412193701-177186" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.101227  414625 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220412193701-177186" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.104859  414625 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220412193701-177186" in "kube-system" namespace has status "Ready":"True"
	I0412 19:46:15.104882  414625 pod_ready.go:81] duration metric: took 3.646871ms waiting for pod "kube-controller-manager-custom-weave-20220412193701-177186" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.104895  414625 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-drdh9" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.487709  414625 pod_ready.go:92] pod "kube-proxy-drdh9" in "kube-system" namespace has status "Ready":"True"
	I0412 19:46:15.487732  414625 pod_ready.go:81] duration metric: took 382.829505ms waiting for pod "kube-proxy-drdh9" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.487744  414625 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220412193701-177186" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.886917  414625 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220412193701-177186" in "kube-system" namespace has status "Ready":"True"
	I0412 19:46:15.886944  414625 pod_ready.go:81] duration metric: took 399.191747ms waiting for pod "kube-scheduler-custom-weave-20220412193701-177186" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:15.886958  414625 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-dzbgc" in "kube-system" namespace to be "Ready" ...
	I0412 19:46:18.292345  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:20.294401  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:22.792814  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:24.792898  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:26.794165  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:29.292781  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:31.293548  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:33.294275  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:35.303061  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:37.793243  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:39.793533  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:41.793870  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:44.292756  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:46.293692  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:48.794042  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:51.293610  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:53.791906  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:55.792387  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:57.794061  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:46:59.796697  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:02.295896  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:04.794169  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:07.293614  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:09.792877  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:11.793287  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:13.794456  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:16.292498  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:18.794040  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:21.293333  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:23.792870  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:25.794048  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:28.292347  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:30.293416  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:32.293822  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:34.793672  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:36.794128  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:39.292594  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:41.293535  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:43.794415  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:46.293568  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:48.294098  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:50.792597  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:53.293645  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:55.793860  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:47:58.292965  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:00.293401  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:02.793761  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:04.794457  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:07.293151  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:09.294125  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:11.794512  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:14.292546  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:16.293068  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:18.293793  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:20.295850  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:22.793774  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:25.291996  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:27.293708  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:29.793403  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:32.293695  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:34.793822  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:37.294454  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:39.793056  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:42.293324  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:44.293514  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:46.793210  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:49.293254  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:51.293871  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:53.794076  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:55.794288  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:48:58.292957  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:00.296188  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:02.801299  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:05.293112  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:07.793015  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:09.793395  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:11.794352  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:14.292135  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:16.293084  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:18.294120  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:20.294983  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:22.794016  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:25.292172  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:27.294220  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:29.793092  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:31.794316  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:34.292492  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:36.294494  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:38.792882  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:40.793469  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:42.795034  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:45.295833  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:47.792829  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:49.793116  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:52.293143  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:54.294045  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:56.960691  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:49:59.293360  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:50:01.793582  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:50:03.793912  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:50:05.796377  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:50:08.293475  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:50:10.294250  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:50:12.295673  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:50:14.793145  414625 pod_ready.go:102] pod "weave-net-dzbgc" in "kube-system" namespace has status "Ready":"False"
	I0412 19:50:16.297837  414625 pod_ready.go:81] duration metric: took 4m0.410864903s waiting for pod "weave-net-dzbgc" in "kube-system" namespace to be "Ready" ...
	E0412 19:50:16.297875  414625 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0412 19:50:16.297883  414625 pod_ready.go:38] duration metric: took 8m5.290025228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 19:50:16.297912  414625 api_server.go:51] waiting for apiserver process to appear ...
	I0412 19:50:16.300410  414625 out.go:176] 
	W0412 19:50:16.300564  414625 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0412 19:50:16.300646  414625 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0412 19:50:16.300665  414625 out.go:241] * Related issues:
	* Related issues:
	W0412 19:50:16.300711  414625 out.go:241]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0412 19:50:16.300801  414625 out.go:241]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0412 19:50:16.302343  414625 out.go:176] 

** /stderr **
net_test.go:100: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (524.87s)

TestNetworkPlugins/group/kindnet/DNS (352.43s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161871887s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146200533s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.156290581s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:44:40.347444  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:44:51.552811  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159408576s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154608885s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:45:19.236533  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
E0412 19:45:24.186622  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153712341s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153572747s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:46:09.091436  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:09.096769  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:09.107051  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:09.128078  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:09.168326  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:09.248637  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:09.409033  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:09.729300  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:10.369814  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:11.650322  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:14.211144  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132215682s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0412 19:46:18.115836  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:18.121082  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:18.131340  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:18.151552  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:18.191787  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:18.272075  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:18.432723  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:18.753311  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:19.332027  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:19.394163  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:20.675192  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:23.236151  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:28.356702  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:46:29.572249  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.17441551s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132070994s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0412 19:47:19.267239  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:19.587784  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:20.228746  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:21.509799  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:24.070562  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:29.190797  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:31.013411  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:48:21.118571  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142865746s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:49:40.347394  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kindnet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.183355757s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kindnet/DNS (352.43s)

TestNetworkPlugins/group/enable-default-cni/DNS (323.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:46:50.053142  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153191592s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0412 19:46:59.077977  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139394716s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:47:18.949602  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:18.954889  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:18.965124  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:18.985558  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:19.026365  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:19.106862  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132244495s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:47:39.431977  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 19:47:40.038257  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149228565s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:47:59.912841  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150825182s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140919061s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:48:40.873128  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153498223s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0412 19:48:52.934573  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:49:01.958696  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161784743s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.174569109s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135436893s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:51:09.091092  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16301614s)

-- stdout --
	;; connection timed out; no servers could be reached


-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.244185237s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (323.44s)

TestNetworkPlugins/group/bridge/DNS (296.92s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159351959s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127117442s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:51:18.114916  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13170645s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:51:36.774792  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 19:51:45.799915  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147836539s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147637518s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133073571s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:52:43.390843  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:52:46.634763  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.283362548s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136652948s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0412 19:53:21.118195  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:53:35.307462  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:35.312711  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:35.322937  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:35.343158  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:35.383400  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:35.463652  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:35.624056  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:35.944650  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:36.585832  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:37.866700  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:40.427510  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:53:45.547667  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127242624s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0412 19:53:55.788627  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:54:16.269113  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120130378s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:55:24.187505  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133045463s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (296.92s)

TestNetworkPlugins/group/kubenet/DNS (367.91s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:54:51.552542  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
E0412 19:54:57.230199  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126644831s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142797756s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126714283s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155221278s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135508045s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0412 19:56:14.597595  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:56:18.115746  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:19.151057  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.169532136s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0412 19:56:33.655254  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:33.660478  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:33.670722  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:33.691010  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:33.731272  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:33.811545  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:33.971898  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:34.292867  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:34.933737  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:36.214205  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:38.775315  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13820677s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:57:14.618177  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125836484s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:57:55.579397  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130340694s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 19:58:21.118272  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:58:27.233735  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145419389s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0412 19:58:35.306696  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:59:02.991664  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
E0412 19:59:17.499602  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132187662s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0412 19:59:40.348129  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:59:51.552821  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
E0412 20:00:24.186612  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 20:00:32.134508  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:32.139842  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:32.150072  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:32.170326  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:32.210609  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:32.290916  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:32.451313  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:32.772039  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:33.412194  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:34.693188  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:37.254372  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:00:42.375474  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:00:52.616463  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.262243523s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (367.91s)
E0412 20:04:40.347774  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 20:04:40.544868  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:40.550102  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:40.560338  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:40.580581  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:40.620862  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:40.701462  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:40.862239  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:41.182801  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:41.823363  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:43.103915  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:45.664346  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:50.785477  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:04:51.552219  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
E0412 20:05:01.026167  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory


Test pass (258/285)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.96
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.23.5/json-events 4.1
11 TestDownloadOnly/v1.23.5/preload-exists 0
15 TestDownloadOnly/v1.23.5/LogsDuration 0.07
17 TestDownloadOnly/v1.23.6-rc.0/json-events 4.13
18 TestDownloadOnly/v1.23.6-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.6-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.32
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
25 TestDownloadOnlyKic 2.59
26 TestBinaryMirror 0.83
27 TestOffline 64.84
29 TestAddons/Setup 93.79
31 TestAddons/parallel/Registry 12.57
32 TestAddons/parallel/Ingress 37.08
33 TestAddons/parallel/MetricsServer 5.57
34 TestAddons/parallel/HelmTiller 9.26
36 TestAddons/parallel/CSI 41.71
38 TestAddons/serial/GCPAuth 35.81
39 TestAddons/StoppedEnableDisable 11.06
40 TestCertOptions 32.46
41 TestCertExpiration 220.71
42 TestDockerFlags 27.61
43 TestForceSystemdFlag 28.79
44 TestForceSystemdEnv 240.81
45 TestKVMDriverInstallOrUpdate 3.88
49 TestErrorSpam/setup 24.07
50 TestErrorSpam/start 0.86
51 TestErrorSpam/status 1.09
52 TestErrorSpam/pause 1.39
53 TestErrorSpam/unpause 1.45
54 TestErrorSpam/stop 10.95
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 40.54
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 5.23
61 TestFunctional/serial/KubeContext 0.03
62 TestFunctional/serial/KubectlGetPods 0.16
65 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
66 TestFunctional/serial/CacheCmd/cache/add_local 1.48
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.45
70 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
71 TestFunctional/serial/CacheCmd/cache/delete 0.12
72 TestFunctional/serial/MinikubeKubectlCmd 0.11
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
74 TestFunctional/serial/ExtraConfig 26.88
75 TestFunctional/serial/ComponentHealth 0.05
76 TestFunctional/serial/LogsCmd 1.22
77 TestFunctional/serial/LogsFileCmd 1.25
79 TestFunctional/parallel/ConfigCmd 0.5
80 TestFunctional/parallel/DashboardCmd 30.46
81 TestFunctional/parallel/DryRun 0.52
82 TestFunctional/parallel/InternationalLanguage 0.23
83 TestFunctional/parallel/StatusCmd 1.21
86 TestFunctional/parallel/ServiceCmd 13.27
87 TestFunctional/parallel/ServiceCmdConnect 12.59
88 TestFunctional/parallel/AddonsCmd 0.17
89 TestFunctional/parallel/PersistentVolumeClaim 41.37
91 TestFunctional/parallel/SSHCmd 0.81
92 TestFunctional/parallel/CpCmd 1.66
93 TestFunctional/parallel/MySQL 27.85
94 TestFunctional/parallel/FileSync 0.37
95 TestFunctional/parallel/CertSync 2.29
99 TestFunctional/parallel/NodeLabels 0.06
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
103 TestFunctional/parallel/ProfileCmd/profile_not_create 0.6
104 TestFunctional/parallel/Version/short 0.06
105 TestFunctional/parallel/Version/components 1.17
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.27
110 TestFunctional/parallel/ProfileCmd/profile_list 0.46
111 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
113 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
117 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
118 TestFunctional/parallel/DockerEnv/bash 1.54
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
123 TestFunctional/parallel/ImageCommands/ImageBuild 4.14
124 TestFunctional/parallel/ImageCommands/Setup 1.34
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.78
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
129 TestFunctional/parallel/MountCmd/any-port 5.79
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.05
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.03
132 TestFunctional/parallel/MountCmd/specific-port 2.3
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.21
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.11
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.59
137 TestFunctional/delete_addon-resizer_images 0.1
138 TestFunctional/delete_my-image_image 0.03
139 TestFunctional/delete_minikube_cached_images 0.03
142 TestIngressAddonLegacy/StartLegacyK8sCluster 54.6
144 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.6
145 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.35
146 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.08
149 TestJSONOutput/start/Command 40.67
150 TestJSONOutput/start/Audit 0
152 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/pause/Command 0.66
156 TestJSONOutput/pause/Audit 0
158 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/unpause/Command 0.58
162 TestJSONOutput/unpause/Audit 0
164 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/stop/Command 10.93
168 TestJSONOutput/stop/Audit 0
170 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
172 TestErrorJSONOutput 0.26
174 TestKicCustomNetwork/create_custom_network 27.11
175 TestKicCustomNetwork/use_default_bridge_network 26.72
176 TestKicExistingNetwork 26.64
177 TestKicCustomSubnet 26.21
178 TestMainNoArgs 0.06
181 TestMountStart/serial/StartWithMountFirst 5.65
182 TestMountStart/serial/VerifyMountFirst 0.32
183 TestMountStart/serial/StartWithMountSecond 5.25
184 TestMountStart/serial/VerifyMountSecond 0.32
185 TestMountStart/serial/DeleteFirst 1.73
186 TestMountStart/serial/VerifyMountPostDelete 0.32
187 TestMountStart/serial/Stop 1.25
188 TestMountStart/serial/RestartStopped 6.61
189 TestMountStart/serial/VerifyMountPostStop 0.32
192 TestMultiNode/serial/FreshStart2Nodes 81.63
193 TestMultiNode/serial/DeployApp2Nodes 3.92
194 TestMultiNode/serial/PingHostFrom2Pods 0.82
195 TestMultiNode/serial/AddNode 25.5
196 TestMultiNode/serial/ProfileList 0.34
197 TestMultiNode/serial/CopyFile 11.49
198 TestMultiNode/serial/StopNode 2.42
199 TestMultiNode/serial/StartAfterStop 25.04
200 TestMultiNode/serial/RestartKeepsNodes 101.4
201 TestMultiNode/serial/DeleteNode 5.16
202 TestMultiNode/serial/StopMultiNode 21.73
203 TestMultiNode/serial/RestartMultiNode 84.16
204 TestMultiNode/serial/ValidateNameConflict 27.42
209 TestPreload 107.56
211 TestScheduledStopUnix 98.27
212 TestSkaffold 54.53
214 TestInsufficientStorage 12.89
215 TestRunningBinaryUpgrade 67
217 TestKubernetesUpgrade 79.67
218 TestMissingContainerUpgrade 111
220 TestStoppedBinaryUpgrade/Setup 0.46
221 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
222 TestNoKubernetes/serial/StartWithK8s 47.35
223 TestStoppedBinaryUpgrade/Upgrade 71.71
224 TestNoKubernetes/serial/StartWithStopK8s 15.6
225 TestNoKubernetes/serial/Start 202.02
226 TestStoppedBinaryUpgrade/MinikubeLogs 1.5
246 TestPause/serial/Start 42.07
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
248 TestNoKubernetes/serial/ProfileList 1.43
249 TestNoKubernetes/serial/Stop 1.27
250 TestNoKubernetes/serial/StartNoArgs 5.82
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
252 TestPause/serial/SecondStartNoReconfiguration 6.45
253 TestPause/serial/Pause 0.64
254 TestPause/serial/VerifyStatus 0.37
255 TestPause/serial/Unpause 0.57
256 TestPause/serial/PauseAgain 0.85
257 TestPause/serial/DeletePaused 2.45
258 TestPause/serial/VerifyDeletedResources 0.68
259 TestNetworkPlugins/group/auto/Start 57.96
260 TestNetworkPlugins/group/false/Start 51.9
261 TestNetworkPlugins/group/cilium/Start 90.33
263 TestNetworkPlugins/group/auto/KubeletFlags 0.4
264 TestNetworkPlugins/group/auto/NetCatPod 14.23
265 TestNetworkPlugins/group/false/KubeletFlags 0.36
266 TestNetworkPlugins/group/false/NetCatPod 11.37
267 TestNetworkPlugins/group/auto/DNS 0.18
268 TestNetworkPlugins/group/auto/Localhost 0.13
269 TestNetworkPlugins/group/auto/HairPin 5.14
270 TestNetworkPlugins/group/false/DNS 0.19
271 TestNetworkPlugins/group/false/Localhost 0.15
272 TestNetworkPlugins/group/false/HairPin 5.16
274 TestNetworkPlugins/group/enable-default-cni/Start 292.32
275 TestNetworkPlugins/group/cilium/ControllerPod 5.02
276 TestNetworkPlugins/group/cilium/KubeletFlags 0.39
277 TestNetworkPlugins/group/cilium/NetCatPod 11.98
278 TestNetworkPlugins/group/cilium/DNS 0.14
279 TestNetworkPlugins/group/cilium/Localhost 0.12
280 TestNetworkPlugins/group/cilium/HairPin 0.13
281 TestNetworkPlugins/group/kindnet/Start 55.62
282 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
283 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
284 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
286 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
287 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
289 TestNetworkPlugins/group/bridge/Start 44
290 TestNetworkPlugins/group/kubenet/Start 291.41
292 TestStartStop/group/old-k8s-version/serial/FirstStart 315.51
293 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
294 TestNetworkPlugins/group/bridge/NetCatPod 9.17
297 TestStartStop/group/no-preload/serial/FirstStart 303.78
298 TestNetworkPlugins/group/kubenet/KubeletFlags 0.34
299 TestNetworkPlugins/group/kubenet/NetCatPod 9.22
301 TestStartStop/group/old-k8s-version/serial/DeployApp 9.32
303 TestStartStop/group/embed-certs/serial/FirstStart 38.9
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
305 TestStartStop/group/old-k8s-version/serial/Stop 11.1
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/old-k8s-version/serial/SecondStart 562.47
308 TestStartStop/group/embed-certs/serial/DeployApp 7.27
309 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.58
310 TestStartStop/group/embed-certs/serial/Stop 10.83
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/embed-certs/serial/SecondStart 338.44
313 TestStartStop/group/no-preload/serial/DeployApp 9.32
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.57
315 TestStartStop/group/no-preload/serial/Stop 10.92
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/no-preload/serial/SecondStart 594.95
319 TestStartStop/group/default-k8s-different-port/serial/FirstStart 289.22
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.02
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.19
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
323 TestStartStop/group/embed-certs/serial/Pause 3.13
325 TestStartStop/group/newest-cni/serial/FirstStart 39.7
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.65
328 TestStartStop/group/newest-cni/serial/Stop 10.89
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
330 TestStartStop/group/newest-cni/serial/SecondStart 19.23
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
334 TestStartStop/group/newest-cni/serial/Pause 2.91
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
336 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.18
337 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
338 TestStartStop/group/old-k8s-version/serial/Pause 2.97
339 TestStartStop/group/default-k8s-different-port/serial/DeployApp 7.38
340 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.58
341 TestStartStop/group/default-k8s-different-port/serial/Stop 10.81
342 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
343 TestStartStop/group/default-k8s-different-port/serial/SecondStart 570.24
344 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
345 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.18
346 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.35
347 TestStartStop/group/no-preload/serial/Pause 2.89
348 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
349 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.06
350 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.35
351 TestStartStop/group/default-k8s-different-port/serial/Pause 2.88
TestDownloadOnly/v1.16.0/json-events (9.96s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412191244-177186 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412191244-177186 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.957302276s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.96s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220412191244-177186
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220412191244-177186: exit status 85 (67.51759ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 19:12:44
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 19:12:44.076739  177198 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:12:44.076888  177198 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:12:44.076902  177198 out.go:310] Setting ErrFile to fd 2...
	I0412 19:12:44.076911  177198 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:12:44.077038  177198 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	W0412 19:12:44.077158  177198 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: no such file or directory
	I0412 19:12:44.077371  177198 out.go:304] Setting JSON to true
	I0412 19:12:44.078389  177198 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6917,"bootTime":1649783847,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:12:44.078448  177198 start.go:125] virtualization: kvm guest
	W0412 19:12:44.081205  177198 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball: no such file or directory
	I0412 19:12:44.081222  177198 notify.go:193] Checking for updates...
	I0412 19:12:44.083193  177198 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:12:44.121348  177198 docker.go:137] docker version: linux-20.10.14
	I0412 19:12:44.121427  177198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:12:44.206022  177198 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:100 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:42 SystemTime:2022-04-12 19:12:44.147255208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:12:44.206123  177198 docker.go:254] overlay module found
	I0412 19:12:44.208257  177198 start.go:284] selected driver: docker
	I0412 19:12:44.208271  177198 start.go:801] validating driver "docker" against <nil>
	I0412 19:12:44.208435  177198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:12:44.292321  177198 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:100 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:42 SystemTime:2022-04-12 19:12:44.234577818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:12:44.292431  177198 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 19:12:44.292834  177198 start_flags.go:373] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0412 19:12:44.292944  177198 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0412 19:12:44.293119  177198 cni.go:93] Creating CNI manager for ""
	I0412 19:12:44.293126  177198 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0412 19:12:44.293136  177198 start_flags.go:306] config:
	{Name:download-only-20220412191244-177186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220412191244-177186 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:12:44.295201  177198 cache.go:120] Beginning downloading kic base image for docker with docker
	I0412 19:12:44.296565  177198 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0412 19:12:44.296596  177198 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 19:12:44.340891  177198 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 19:12:44.340914  177198 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0412 19:12:44.341221  177198 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
	I0412 19:12:44.341304  177198 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0412 19:12:44.350641  177198 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0412 19:12:44.350663  177198 cache.go:57] Caching tarball of preloaded images
	I0412 19:12:44.350781  177198 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0412 19:12:44.352958  177198 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0412 19:12:44.412637  177198 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0412 19:12:47.119949  177198 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0412 19:12:47.120025  177198 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0412 19:12:47.800888  177198 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0412 19:12:47.801235  177198 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/download-only-20220412191244-177186/config.json ...
	I0412 19:12:47.801272  177198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/download-only-20220412191244-177186/config.json: {Name:mk4674a78fe180b652bf6cf496107c41be450b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:12:47.801457  177198 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0412 19:12:47.801636  177198 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220412191244-177186"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
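The preload fetch logged above downloads the tarball with a `?checksum=md5:...` hint, then recomputes and compares the digest before trusting the file (the `saving checksum` / `verifying checksum` lines). A minimal shell sketch of that verify step; the file and its contents here are stand-ins, not the real preload:

```shell
#!/bin/sh
set -eu

# Stand-in "preload tarball" (the real one is a large lz4 archive from GCS).
tmp=$(mktemp -d)
printf 'fake preload contents\n' > "$tmp/preload.tar.lz4"

# The expected checksum normally arrives out of band via the
# ?checksum=md5:... URL hint; here we just record it from the file itself.
want=$(md5sum "$tmp/preload.tar.lz4" | awk '{print $1}')

# Verify step: recompute the digest and refuse the file on mismatch.
got=$(md5sum "$tmp/preload.tar.lz4" | awk '{print $1}')
if [ "$got" != "$want" ]; then
    echo "checksum mismatch: got $got want $want" >&2
    exit 1
fi
echo "checksum ok: $got"
rm -rf "$tmp"
```

The sketch only mirrors the compare-before-trust shape of the logged steps, not minikube's actual download code.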

TestDownloadOnly/v1.23.5/json-events (4.1s)

=== RUN   TestDownloadOnly/v1.23.5/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412191244-177186 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412191244-177186 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.100159612s)
--- PASS: TestDownloadOnly/v1.23.5/json-events (4.10s)

TestDownloadOnly/v1.23.5/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.5/preload-exists
--- PASS: TestDownloadOnly/v1.23.5/preload-exists (0.00s)

TestDownloadOnly/v1.23.5/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.23.5/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220412191244-177186
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220412191244-177186: exit status 85 (68.157754ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 19:12:54
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220412191244-177186"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5/LogsDuration (0.07s)
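The `Log line format` header in the output above is glog's standard prefix (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`). A small shell sketch pulling the severity and source location out of one such line; the sample line is made up for illustration:

```shell
#!/bin/sh
set -eu

# A made-up line in glog's format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
line='I0412 19:12:47.801457  177198 preload.go:132] Checking if preload exists'

# First character encodes severity: I=INFO, W=WARNING, E=ERROR, F=FATAL.
sev=$(printf '%s' "$line" | cut -c1)

# Fourth whitespace-separated field (before the closing bracket) is file:line.
loc=$(printf '%s' "$line" | awk '{print $4}' | tr -d ']')

echo "severity=$sev location=$loc"
```

This is why grepping a run for `^E` or `^W` quickly surfaces errors and warnings in `--alsologtostderr` output.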

TestDownloadOnly/v1.23.6-rc.0/json-events (4.13s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412191244-177186 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412191244-177186 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.129388208s)
--- PASS: TestDownloadOnly/v1.23.6-rc.0/json-events (4.13s)

TestDownloadOnly/v1.23.6-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.6-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220412191244-177186
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220412191244-177186: exit status 85 (69.422278ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 19:12:58
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220412191244-177186"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.32s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.32s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220412191244-177186
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnlyKic (2.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220412191303-177186 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220412191303-177186 --force --alsologtostderr --driver=docker  --container-runtime=docker: (1.607318422s)
helpers_test.go:175: Cleaning up "download-docker-20220412191303-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220412191303-177186
--- PASS: TestDownloadOnlyKic (2.59s)

TestBinaryMirror (0.83s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220412191305-177186 --alsologtostderr --binary-mirror http://127.0.0.1:41923 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-20220412191305-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220412191305-177186
--- PASS: TestBinaryMirror (0.83s)

TestOffline (64.84s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20220412193516-177186 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220412193516-177186 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m2.203403783s)
helpers_test.go:175: Cleaning up "offline-docker-20220412193516-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20220412193516-177186

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220412193516-177186: (2.639454986s)
--- PASS: TestOffline (64.84s)

TestAddons/Setup (93.79s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220412191306-177186 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220412191306-177186 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m33.790282358s)
--- PASS: TestAddons/Setup (93.79s)

TestAddons/parallel/Registry (12.57s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 9.571833ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-phglr" [9d1387bf-7361-44ac-95e5-58181da0f8de] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009048653s

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-mcrlb" [349cd45a-816e-4e65-b7c2-66b15168ca89] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00704048s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220412191306-177186 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220412191306-177186 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:295: (dbg) Done: kubectl --context addons-20220412191306-177186 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (1.90000012s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 ip
2022/04/12 19:14:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:338: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (12.57s)

TestAddons/parallel/Ingress (37.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220412191306-177186 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Done: kubectl --context addons-20220412191306-177186 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.856414766s)
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220412191306-177186 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220412191306-177186 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [d3676281-09d5-41fe-bdd0-51a0a9ba6004] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [d3676281-09d5-41fe-bdd0-51a0a9ba6004] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00823845s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:236: (dbg) Run:  kubectl --context addons-20220412191306-177186 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable ingress-dns --alsologtostderr -v=1: (1.425337298s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable ingress --alsologtostderr -v=1: (7.491227049s)
--- PASS: TestAddons/parallel/Ingress (37.08s)

TestAddons/parallel/MetricsServer (5.57s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 9.838497ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-bd6f4dd56-jqk5b" [d8c493a6-1e6b-410b-acab-68c4d24fa2af] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009767018s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220412191306-177186 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.57s)

TestAddons/parallel/HelmTiller (9.26s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 9.909737ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-4lxrk" [b8055335-4f8b-4912-92b0-75c0334e16d7] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009657808s

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220412191306-177186 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220412191306-177186 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.902814218s)
addons_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.26s)

TestAddons/parallel/CSI (41.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 5.172031ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220412191306-177186 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220412191306-177186 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220412191306-177186 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [60c37eae-9bed-41fd-a3b3-15276cf757a5] Pending
helpers_test.go:342: "task-pv-pod" [60c37eae-9bed-41fd-a3b3-15276cf757a5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [60c37eae-9bed-41fd-a3b3-15276cf757a5] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.006227065s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220412191306-177186 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220412191306-177186 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220412191306-177186 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220412191306-177186 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220412191306-177186 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220412191306-177186 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220412191306-177186 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220412191306-177186 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [f0865135-c329-4161-bc5d-8500e5d61b78] Pending
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [f0865135-c329-4161-bc5d-8500e5d61b78] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [f0865135-c329-4161-bc5d-8500e5d61b78] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.007901085s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220412191306-177186 delete pod task-pv-pod-restore
addons_test.go:576: (dbg) Done: kubectl --context addons-20220412191306-177186 delete pod task-pv-pod-restore: (1.077123349s)
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220412191306-177186 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220412191306-177186 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.8828877s)
addons_test.go:592: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.71s)

TestAddons/serial/GCPAuth (35.81s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220412191306-177186 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [79818d8b-be82-4037-b198-777e47254fa1] Pending
helpers_test.go:342: "busybox" [79818d8b-be82-4037-b198-777e47254fa1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [79818d8b-be82-4037-b198-777e47254fa1] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.006403849s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220412191306-177186 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220412191306-177186 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-linux-amd64 -p addons-20220412191306-177186 addons disable gcp-auth --alsologtostderr -v=1: (5.73444006s)
addons_test.go:681: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412191306-177186 addons enable gcp-auth
addons_test.go:687: (dbg) Run:  kubectl --context addons-20220412191306-177186 apply -f testdata/private-image.yaml
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7f8587d5b7-6n5zn" [8b838068-2ebf-46f2-ab33-067674daf176] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7f8587d5b7-6n5zn" [8b838068-2ebf-46f2-ab33-067674daf176] Running
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 10.00746532s
addons_test.go:700: (dbg) Run:  kubectl --context addons-20220412191306-177186 apply -f testdata/private-image-eu.yaml
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-869dcfd8c7-pfx4t" [0444c1a5-107d-465d-82e8-e75838584cc2] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-869dcfd8c7-pfx4t" [0444c1a5-107d-465d-82e8-e75838584cc2] Running
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.006614496s
--- PASS: TestAddons/serial/GCPAuth (35.81s)

TestAddons/StoppedEnableDisable (11.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220412191306-177186
addons_test.go:132: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220412191306-177186: (10.879307679s)
addons_test.go:136: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220412191306-177186
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220412191306-177186
--- PASS: TestAddons/StoppedEnableDisable (11.06s)

TestCertOptions (32.46s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220412193953-177186 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0412 19:39:54.111506  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
E0412 19:39:56.672084  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220412193953-177186 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (29.310584986s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220412193953-177186 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220412193953-177186 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220412193953-177186 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220412193953-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220412193953-177186
E0412 19:40:24.187153  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220412193953-177186: (2.410192527s)
--- PASS: TestCertOptions (32.46s)

TestCertExpiration (220.71s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220412193707-177186 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220412193707-177186 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (33.849050323s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220412193707-177186 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220412193707-177186 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (4.221502206s)
helpers_test.go:175: Cleaning up "cert-expiration-20220412193707-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220412193707-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220412193707-177186: (2.638511404s)
--- PASS: TestCertExpiration (220.71s)

TestDockerFlags (27.61s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20220412193741-177186 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220412193741-177186 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.60898449s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220412193741-177186 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220412193741-177186 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220412193741-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20220412193741-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220412193741-177186: (2.323396259s)
--- PASS: TestDockerFlags (27.61s)

TestForceSystemdFlag (28.79s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220412193632-177186 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220412193632-177186 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.957267154s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220412193632-177186 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220412193632-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220412193632-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220412193632-177186: (2.410302344s)
--- PASS: TestForceSystemdFlag (28.79s)

TestForceSystemdEnv (240.81s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220412193705-177186 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220412193705-177186 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (3m57.774127435s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220412193705-177186 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220412193705-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220412193705-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220412193705-177186: (2.596485731s)
--- PASS: TestForceSystemdEnv (240.81s)

TestKVMDriverInstallOrUpdate (3.88s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.88s)

TestErrorSpam/setup (24.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220412191617-177186 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220412191617-177186 --driver=docker  --container-runtime=docker
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220412191617-177186 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220412191617-177186 --driver=docker  --container-runtime=docker: (24.071452888s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (24.07s)

TestErrorSpam/start (0.86s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.39s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 pause
--- PASS: TestErrorSpam/pause (1.39s)

TestErrorSpam/unpause (1.45s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 unpause
--- PASS: TestErrorSpam/unpause (1.45s)

TestErrorSpam/stop (10.95s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 stop: (10.699920354s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412191617-177186 --log_dir /tmp/nospam-20220412191617-177186 stop
--- PASS: TestErrorSpam/stop (10.95s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1784: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/test/nested/copy/177186/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2163: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412191658-177186 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2163: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220412191658-177186 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (40.543811633s)
--- PASS: TestFunctional/serial/StartWithProxy (40.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.23s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412191658-177186 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220412191658-177186 --alsologtostderr -v=8: (5.22538888s)
functional_test.go:658: soft start took 5.22602074s for "functional-20220412191658-177186" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.23s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.16s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-20220412191658-177186 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.16s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 cache add k8s.gcr.io/pause:3.3: (1.256542603s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220412191658-177186 /tmp/TestFunctionalserialCacheCmdcacheadd_local1579187634/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 cache add minikube-local-cache-test:functional-20220412191658-177186
functional_test.go:1084: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 cache add minikube-local-cache-test:functional-20220412191658-177186: (1.185709428s)
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 cache delete minikube-local-cache-test:functional-20220412191658-177186
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220412191658-177186
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (334.95912ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 kubectl -- --context functional-20220412191658-177186 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-20220412191658-177186 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (26.88s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412191658-177186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220412191658-177186 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (26.882197563s)
functional_test.go:756: restart took 26.882295804s for "functional-20220412191658-177186" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (26.88s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-20220412191658-177186 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
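The health check above boils down to: list the control-plane pods (`-l tier=control-plane -n kube-system -o=json`) and assert each is in phase `Running` with a `Ready` condition. A self-contained sketch of that assertion against a hand-written stub of the kubectl JSON (one pod shown; the real test iterates etcd, kube-apiserver, kube-controller-manager and kube-scheduler, and the crude `grep` checks stand in for real JSON parsing):

```shell
#!/bin/sh
# Stub of `kubectl get po -l tier=control-plane -n kube-system -o=json`
# output, trimmed to the fields the test inspects (Pod API field names).
pods_json='{"items":[{"metadata":{"name":"etcd-minikube"},
  "status":{"phase":"Running",
            "conditions":[{"type":"Ready","status":"True"}]}}]}'

# Assert phase and readiness the way the test's log lines report them.
echo "$pods_json" | grep -q '"phase":"Running"' && echo "etcd phase: Running"
echo "$pods_json" | grep -q '"type":"Ready","status":"True"' && echo "etcd status: Ready"
```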

TestFunctional/serial/LogsCmd (1.22s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 logs: (1.21905787s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 logs --file /tmp/TestFunctionalserialLogsFileCmd1208687149/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 logs --file /tmp/TestFunctionalserialLogsFileCmd1208687149/001/logs.txt: (1.249458727s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412191658-177186 config get cpus: exit status 14 (82.140276ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 config set cpus 2

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412191658-177186 config get cpus: exit status 14 (76.924698ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
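The sequence above exercises `minikube config` as a simple key-value store: `get` on an unset key exits with status 14 ("specified key could not be found in config"), `set` writes, `unset` deletes. A minimal stand-alone sketch of that round trip, using a hypothetical shell function as a stand-in for the real `minikube -p <profile> config` (the exit code 14 is taken from the log above):

```shell
#!/bin/sh
# Hypothetical file-backed stand-in for `minikube config`; exit code 14
# mirrors the log's "specified key could not be found in config".
STORE=$(mktemp)
config() {
  case "$1" in
    set)   echo "$2=$3" >> "$STORE" ;;
    unset) { grep -v "^$2=" "$STORE" || true; } > "$STORE.tmp"
           mv "$STORE.tmp" "$STORE" ;;
    get)   grep "^$2=" "$STORE" | tail -n1 | cut -d= -f2- | grep . || return 14 ;;
  esac
}

config unset cpus
config get cpus || echo "exit status $? (key not found)"   # exit status 14 (key not found)
config set cpus 2
config get cpus                                            # prints: 2
config unset cpus
```

The same get/set/unset/get cycle is what the test drives twice against the real binary.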

TestFunctional/parallel/DashboardCmd (30.46s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220412191658-177186 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220412191658-177186 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 215474: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.46s)

TestFunctional/parallel/DryRun (0.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412191658-177186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220412191658-177186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (207.980404ms)

-- stdout --
	* [functional-20220412191658-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0412 19:18:38.662281  213429 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:18:38.662388  213429 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:18:38.662406  213429 out.go:310] Setting ErrFile to fd 2...
	I0412 19:18:38.662411  213429 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:18:38.662528  213429 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:18:38.662744  213429 out.go:304] Setting JSON to false
	I0412 19:18:38.663769  213429 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7272,"bootTime":1649783847,"procs":414,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:18:38.663830  213429 start.go:125] virtualization: kvm guest
	I0412 19:18:38.667390  213429 out.go:176] * [functional-20220412191658-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 19:18:38.668990  213429 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:18:38.670288  213429 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:18:38.671627  213429 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:18:38.672939  213429 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:18:38.674316  213429 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:18:38.674738  213429 config.go:178] Loaded profile config "functional-20220412191658-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:18:38.675164  213429 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:18:38.717268  213429 docker.go:137] docker version: linux-20.10.14
	I0412 19:18:38.717381  213429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:18:38.804179  213429 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:101 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:48 SystemTime:2022-04-12 19:18:38.744525092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:18:38.804275  213429 docker.go:254] overlay module found
	I0412 19:18:38.806660  213429 out.go:176] * Using the docker driver based on existing profile
	I0412 19:18:38.806687  213429 start.go:284] selected driver: docker
	I0412 19:18:38.806702  213429 start.go:801] validating driver "docker" against &{Name:functional-20220412191658-177186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220412191658-177186 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:fa
lse registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:18:38.806810  213429 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:18:38.806862  213429 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:18:38.806882  213429 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 19:18:38.808481  213429 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:18:38.810446  213429 out.go:176] 
	W0412 19:18:38.810550  213429 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0412 19:18:38.811905  213429 out.go:176] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412191658-177186 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.52s)
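The dry-run path validates flags without touching the existing node; here the `--memory 250MB` request is rejected against minikube's usable minimum of 1800MB and the command exits 23 (`RSRC_INSUFFICIENT_REQ_MEMORY`). A rough, stand-alone sketch of that validation step, with the floor and exit code taken from the log above (the function name is illustrative, not minikube's):

```shell
#!/bin/sh
# Sketch of the memory-floor check seen above: a request below the 1800MB
# usable minimum is rejected with exit status 23.
MIN_MB=1800

validate_memory() {
  req_mb=${1%MB}    # strip the unit, e.g. "250MB" -> "250"
  if [ "$req_mb" -lt "$MIN_MB" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${req_mb}MiB is less than the usable minimum of ${MIN_MB}MB" >&2
    return 23
  fi
}

validate_memory 250MB || echo "exit status: $?"   # exit status: 23
validate_memory 4000MB && echo "ok"               # ok
```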

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412191658-177186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220412191658-177186 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (226.492661ms)

-- stdout --
	* [functional-20220412191658-177186] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0412 19:18:35.947934  212018 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:18:35.948290  212018 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:18:35.948302  212018 out.go:310] Setting ErrFile to fd 2...
	I0412 19:18:35.948309  212018 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:18:35.948631  212018 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:18:35.949045  212018 out.go:304] Setting JSON to false
	I0412 19:18:35.950473  212018 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7269,"bootTime":1649783847,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:18:35.950547  212018 start.go:125] virtualization: kvm guest
	I0412 19:18:35.953011  212018 out.go:176] * [functional-20220412191658-177186] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	I0412 19:18:35.954450  212018 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:18:35.955959  212018 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:18:35.957438  212018 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:18:35.958735  212018 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:18:35.960162  212018 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:18:35.960694  212018 config.go:178] Loaded profile config "functional-20220412191658-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:18:35.961194  212018 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:18:36.001306  212018 docker.go:137] docker version: linux-20.10.14
	I0412 19:18:36.001434  212018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:18:36.093553  212018 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:101 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:48 SystemTime:2022-04-12 19:18:36.0342092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:18:36.093670  212018 docker.go:254] overlay module found
	I0412 19:18:36.096948  212018 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0412 19:18:36.096994  212018 start.go:284] selected driver: docker
	I0412 19:18:36.097003  212018 start.go:801] validating driver "docker" against &{Name:functional-20220412191658-177186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220412191658-177186 Namespace:default APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:fa
lse registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:18:36.097122  212018 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:18:36.097153  212018 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:18:36.097176  212018 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0412 19:18:36.098747  212018 out.go:176]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:18:36.100495  212018 out.go:176] 
	W0412 19:18:36.100586  212018 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0412 19:18:36.101882  212018 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

TestFunctional/parallel/StatusCmd (1.21s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

TestFunctional/parallel/ServiceCmd (13.27s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1435: (dbg) Run:  kubectl --context functional-20220412191658-177186 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-20220412191658-177186 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-ncrlv" [a274a566-538c-4e0e-a7cb-a9b9c640459b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-ncrlv" [a274a566-538c-4e0e-a7cb-a9b9c640459b] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.008203635s
functional_test.go:1451: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1478: found endpoint: https://192.168.49.2:31777
functional_test.go:1493: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1513: found endpoint for hello-node: http://192.168.49.2:31777
--- PASS: TestFunctional/parallel/ServiceCmd (13.27s)
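The workflow above is: create a deployment, expose it as a NodePort service, then ask `minikube service --url` for a reachable endpoint. The URL minikube reports is simply the node IP plus the service's allocated NodePort; a trivial sketch of that assembly, using the IP and port values from the log above:

```shell
#!/bin/sh
# How the endpoint in the log is assembled: node IP + allocated NodePort.
# Values are the ones reported above; `--https` would only swap the scheme.
node_ip=192.168.49.2
node_port=31777
scheme=http
echo "$scheme://$node_ip:$node_port"    # http://192.168.49.2:31777
```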

TestFunctional/parallel/ServiceCmdConnect (12.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1561: (dbg) Run:  kubectl --context functional-20220412191658-177186 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1567: (dbg) Run:  kubectl --context functional-20220412191658-177186 expose deployment hello-node-connect --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-ftv2n" [8e0313da-2d8e-42a1-9b5c-52ac8107b51b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-ftv2n" [8e0313da-2d8e-42a1-9b5c-52ac8107b51b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.007024103s
functional_test.go:1581: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 service hello-node-connect --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1587: found endpoint for hello-node-connect: http://192.168.49.2:30481
functional_test.go:1607: http://192.168.49.2:30481: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-74cf8bc446-ftv2n

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30481
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.59s)
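The steps this test automates can be reproduced by hand against a running minikube profile. The `url_to_hostport` helper below is hypothetical (not part of the minikube test suite); it shows how the `service --url` output maps to a curl target.

```shell
# Hypothetical helper: strip the scheme from a service URL,
# e.g. http://192.168.49.2:30481 -> 192.168.49.2:30481
url_to_hostport() {
  printf '%s\n' "${1#*://}"
}

# Against a live cluster (not runnable without one):
#   kubectl create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
#   kubectl expose deployment hello-node-connect --type=NodePort --port=8080
#   url=$(out/minikube-linux-amd64 -p <profile> service hello-node-connect --url)
#   curl -s "$url" | grep -q '^Hostname:'   # echoserver echoes the pod name back
```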

TestFunctional/parallel/AddonsCmd (0.17s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 addons list
functional_test.go:1634: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (41.37s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [90da7d15-dae1-438d-908a-496d5d399a5c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008566736s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220412191658-177186 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220412191658-177186 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220412191658-177186 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220412191658-177186 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [f57856ea-bef6-41a4-9fb5-6cf67e26dd2f] Pending
helpers_test.go:342: "sp-pod" [f57856ea-bef6-41a4-9fb5-6cf67e26dd2f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [f57856ea-bef6-41a4-9fb5-6cf67e26dd2f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.008452715s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220412191658-177186 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220412191658-177186 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220412191658-177186 delete -f testdata/storage-provisioner/pod.yaml: (1.202507021s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220412191658-177186 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [4813d15a-3e75-474a-a186-6913556bcb0c] Pending
helpers_test.go:342: "sp-pod" [4813d15a-3e75-474a-a186-6913556bcb0c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [4813d15a-3e75-474a-a186-6913556bcb0c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.008102403s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220412191658-177186 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.37s)
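The persistence check above can be sketched as: write a file through the claim, delete and recreate the pod, and confirm the file survived. The `expect_contains` helper is hypothetical; the manifest paths are the suite's own testdata.

```shell
# Hypothetical helper: succeed only if string $1 contains substring $2
# (used to assert that "foo" shows up in the ls output of the new pod).
expect_contains() {
  case "$1" in *"$2"*) return 0 ;; *) return 1 ;; esac
}

# Against a live cluster (not runnable here):
#   kubectl apply -f testdata/storage-provisioner/pvc.yaml
#   kubectl apply -f testdata/storage-provisioner/pod.yaml
#   kubectl exec sp-pod -- touch /tmp/mount/foo
#   kubectl delete -f testdata/storage-provisioner/pod.yaml
#   kubectl apply -f testdata/storage-provisioner/pod.yaml
#   expect_contains "$(kubectl exec sp-pod -- ls /tmp/mount)" foo
```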

TestFunctional/parallel/SSHCmd (0.81s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1657: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "echo hello"
functional_test.go:1674: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (1.66s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh -n functional-20220412191658-177186 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 cp functional-20220412191658-177186:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3807823067/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh -n functional-20220412191658-177186 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)

TestFunctional/parallel/MySQL (27.85s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1722: (dbg) Run:  kubectl --context functional-20220412191658-177186 replace --force -f testdata/mysql.yaml
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-b87c45988-vkq6x" [7e048cbe-4945-4975-a53f-4fef7b1892e9] Pending
helpers_test.go:342: "mysql-b87c45988-vkq6x" [7e048cbe-4945-4975-a53f-4fef7b1892e9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:342: "mysql-b87c45988-vkq6x" [7e048cbe-4945-4975-a53f-4fef7b1892e9] Running
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.013751096s
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220412191658-177186 exec mysql-b87c45988-vkq6x -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220412191658-177186 exec mysql-b87c45988-vkq6x -- mysql -ppassword -e "show databases;": exit status 1 (294.294423ms)
** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220412191658-177186 exec mysql-b87c45988-vkq6x -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220412191658-177186 exec mysql-b87c45988-vkq6x -- mysql -ppassword -e "show databases;": exit status 1 (292.366899ms)
** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220412191658-177186 exec mysql-b87c45988-vkq6x -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220412191658-177186 exec mysql-b87c45988-vkq6x -- mysql -ppassword -e "show databases;": exit status 1 (132.644188ms)
** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220412191658-177186 exec mysql-b87c45988-vkq6x -- mysql -ppassword -e "show databases;"
2022/04/12 19:19:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (27.85s)
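The non-zero exits above are expected: a MySQL pod reports Running before mysqld accepts connections, so ERROR 1045/2002 appears on the first attempts and the test simply retries until the query succeeds. A sketch of that retry loop, with `retry_until` as a hypothetical stand-in for the test's internal retry helper:

```shell
# Run "$@" up to $1 times, pausing briefly between failures;
# return success as soon as one attempt succeeds.
retry_until() {
  local attempts=$1 i
  shift
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Against a live cluster (pod name taken from the log above):
#   retry_until 10 kubectl --context <ctx> exec mysql-b87c45988-vkq6x -- \
#     mysql -ppassword -e "show databases;"
```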

TestFunctional/parallel/FileSync (0.37s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1858: Checking for existence of /etc/test/nested/copy/177186/hosts within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo cat /etc/test/nested/copy/177186/hosts"
functional_test.go:1865: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.29s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/177186.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo cat /etc/ssl/certs/177186.pem"
functional_test.go:1901: Checking for existence of /usr/share/ca-certificates/177186.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo cat /usr/share/ca-certificates/177186.pem"
functional_test.go:1901: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1928: Checking for existence of /etc/ssl/certs/1771862.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo cat /etc/ssl/certs/1771862.pem"
functional_test.go:1928: Checking for existence of /usr/share/ca-certificates/1771862.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo cat /usr/share/ca-certificates/1771862.pem"
functional_test.go:1928: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.29s)
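The test checks each synced certificate under two names because OpenSSL looks certificates up by subject hash, so a hash-named file such as `/etc/ssl/certs/51391683.0` must exist alongside the `.pem`. A sketch of how that path can be derived (the `cert_hash_path` helper is hypothetical, not part of the suite):

```shell
# Hypothetical helper: compute the in-VM hash-named path for a certificate,
# using openssl's subject-hash output (e.g. 51391683 for a test cert).
cert_hash_path() {
  printf '/etc/ssl/certs/%s.0\n' "$(openssl x509 -noout -hash -in "$1")"
}

# Usage against a synced cert on the host:
#   cert_hash_path 177186.pem    # expected to print /etc/ssl/certs/<hash>.0
```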

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220412191658-177186 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo systemctl is-active crio"
functional_test.go:1956: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo systemctl is-active crio": exit status 1 (449.98792ms)
-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
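The "exit status 3" above is systemd's conventional code for an inactive unit, so the non-zero exit is the passing case here: with Docker as the active runtime, `systemctl is-active crio` must print "inactive" and fail. A sketch of that inverted check (`is_inactive` is a hypothetical wrapper):

```shell
# Hypothetical wrapper: succeed when the named systemd unit is NOT active,
# mirroring the test's expectation for the non-selected container runtime.
is_inactive() {
  ! systemctl is-active --quiet "$1"
}

# Inside the minikube node:
#   is_inactive crio && echo "crio correctly disabled"
```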

TestFunctional/parallel/ProfileCmd/profile_not_create (0.6s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.60s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2185: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.17s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 version -o=json --components
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 version -o=json --components: (1.16917689s)
--- PASS: TestFunctional/parallel/Version/components (1.17s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220412191658-177186 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.27s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220412191658-177186 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [eafae479-8167-48ba-9a8f-affc16a2dd0f] Pending
helpers_test.go:342: "nginx-svc" [eafae479-8167-48ba-9a8f-affc16a2dd0f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx-svc" [eafae479-8167-48ba-9a8f-affc16a2dd0f] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.053071615s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.27s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "389.813166ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "72.703852ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "360.590748ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "64.353447ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220412191658-177186 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.107.173.223 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220412191658-177186 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/DockerEnv/bash (1.54s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220412191658-177186 docker-env) && out/minikube-linux-amd64 status -p functional-20220412191658-177186"
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220412191658-177186 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.54s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220412191658-177186
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220412191658-177186
docker.io/kubernetesui/metrics-scraper:<none>
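The short format above is one image reference per line, which makes it easy to post-process. A minimal sketch of grouping such a listing by registry, using a few lines sampled from the output above (the snippet is illustrative and not part of the test suite):

```python
from collections import Counter

# A few lines sampled from the `image ls --format short` output above.
listing = """\
k8s.gcr.io/pause:3.6
k8s.gcr.io/kube-proxy:v1.23.5
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
"""

# The registry is the first path component of each image reference.
registries = Counter(line.split("/", 1)[0] for line in listing.splitlines())
print(registries)
```

This kind of one-liner is how the integration tests themselves verify that expected images (e.g. the `k8s.gcr.io` control-plane images) appear in the listing.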
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls --format table:
|---------------------------------------------|----------------------------------|---------------|--------|
|                    Image                    |               Tag                |   Image ID    |  Size  |
|---------------------------------------------|----------------------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-20220412191658-177186 | c7f3aaa982f9f | 30B    |
| docker.io/library/nginx                     | alpine                           | 51696c87e77e4 | 23.4MB |
| docker.io/library/nginx                     | latest                           | 12766a6745eea | 142MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.5                          | 3fc1d62d65872 | 135MB  |
| gcr.io/google-containers/addon-resizer      | functional-20220412191658-177186 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/kube-proxy                       | v1.23.5                          | 3c53fa8541f95 | 112MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                           | a4ca41631cc7a | 46.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>                           | 7801cfc6d5c07 | 34.4MB |
| k8s.gcr.io/pause                            | 3.3                              | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest                           | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7                              | f26e21ddd20df | 450MB  |
| k8s.gcr.io/pause                            | 3.6                              | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                     | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1                              | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.5                          | b0c9e5e4dbb14 | 125MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.5                          | 884d49d6d8c9f | 53.5MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                          | 25f8c7f3da61c | 293MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                               | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/echoserver                       | 1.8                              | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|----------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls --format json:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220412191658-177186"],"size":"32900000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.5"],"size":"125000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"c7f3aaa982f9ff0aacd2ddc15068a8170880b0197168a0732dcda43a7940017b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220412191658-177186"],"size":"30"},{"id":"f26e21ddd20df245d88410116241f3eef1ec49ce888856c95b85081a7250183d","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"45000
0000"},{"id":"12766a6745eea133de9fdcd03ff720fa971fdaf21113d4bc72b417c123b15619","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.5"],"size":"112000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"34400000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d9
8502","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.5"],"size":"53500000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.5"],"size":"135000000"}]
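The JSON format above is a single array of image objects, so it can be consumed with nothing but the standard library. A minimal sketch (the abbreviated sample mirrors two entries from the output above; the field names `id`, `repoDigests`, `repoTags`, and `size` match that output, everything else is illustrative):

```python
import json

# Abbreviated sample in the same shape as `image ls --format json` above.
sample = json.dumps([
    {"id": "6270bb605e12", "repoDigests": [], "repoTags": ["k8s.gcr.io/pause:3.6"], "size": "683000"},
    {"id": "25f8c7f3da61", "repoDigests": [], "repoTags": ["k8s.gcr.io/etcd:3.5.1-0"], "size": "293000000"},
])

images = json.loads(sample)
# Note: "size" is a decimal string of bytes, so convert before doing arithmetic.
total_bytes = sum(int(img["size"]) for img in images)
largest = max(images, key=lambda img: int(img["size"]))
print(f"{len(images)} images, {total_bytes / 1e6:.0f} MB total; largest: {largest['repoTags'][0]}")
```

Treating `size` as a string is the one gotcha: sorting or summing it lexically would give wrong answers for mixed magnitudes.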
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls --format yaml:
- id: 51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.5
size: "125000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.5
size: "135000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220412191658-177186
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "34400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: c7f3aaa982f9ff0aacd2ddc15068a8170880b0197168a0732dcda43a7940017b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220412191658-177186
size: "30"
- id: f26e21ddd20df245d88410116241f3eef1ec49ce888856c95b85081a7250183d
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "450000000"
- id: 12766a6745eea133de9fdcd03ff720fa971fdaf21113d4bc72b417c123b15619
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.5
size: "112000000"
- id: 884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.5
size: "53500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh pgrep buildkitd: exit status 1 (326.34618ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image build -t localhost/my-image:functional-20220412191658-177186 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 image build -t localhost/my-image:functional-20220412191658-177186 testdata/build: (3.320523583s)
functional_test.go:315: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220412191658-177186 image build -t localhost/my-image:functional-20220412191658-177186 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 35bee238535e
Removing intermediate container 35bee238535e
---> 93a2c8b9c1fb
Step 3/3 : ADD content.txt /
---> 170623245600
Successfully built 170623245600
Successfully tagged localhost/my-image:functional-20220412191658-177186
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

TestFunctional/parallel/ImageCommands/Setup (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.304371559s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220412191658-177186
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.34s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412191658-177186

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412191658-177186: (3.502450869s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.78s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/MountCmd/any-port (5.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220412191658-177186 /tmp/TestFunctionalparallelMountCmdany-port1225840112/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1649791116105155199" to /tmp/TestFunctionalparallelMountCmdany-port1225840112/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1649791116105155199" to /tmp/TestFunctionalparallelMountCmdany-port1225840112/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1649791116105155199" to /tmp/TestFunctionalparallelMountCmdany-port1225840112/001/test-1649791116105155199
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (393.354993ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 12 19:18 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 12 19:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 12 19:18 test-1649791116105155199
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh cat /mount-9p/test-1649791116105155199

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220412191658-177186 replace --force -f testdata/busybox-mount-test.yaml

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [e1a21217-840e-4337-be92-2a7c8c931d4c] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [e1a21217-840e-4337-be92-2a7c8c931d4c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [e1a21217-840e-4337-be92-2a7c8c931d4c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 2.008367426s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220412191658-177186 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220412191658-177186 /tmp/TestFunctionalparallelMountCmdany-port1225840112/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.79s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412191658-177186

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412191658-177186: (2.753585902s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.05s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220412191658-177186

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412191658-177186

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412191658-177186: (4.396633712s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.03s)

TestFunctional/parallel/MountCmd/specific-port (2.3s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220412191658-177186 /tmp/TestFunctionalparallelMountCmdspecific-port3736892568/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (397.158372ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220412191658-177186 /tmp/TestFunctionalparallelMountCmdspecific-port3736892568/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh "sudo umount -f /mount-9p": exit status 1 (493.408628ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220412191658-177186 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220412191658-177186 /tmp/TestFunctionalparallelMountCmdspecific-port3736892568/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image save gcr.io/google-containers/addon-resizer:functional-20220412191658-177186 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 image save gcr.io/google-containers/addon-resizer:functional-20220412191658-177186 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (2.211897915s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.21s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image rm gcr.io/google-containers/addon-resizer:functional-20220412191658-177186
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.11s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220412191658-177186
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412191658-177186 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220412191658-177186
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412191658-177186 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220412191658-177186: (3.523099026s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220412191658-177186
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.59s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220412191658-177186
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220412191658-177186
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220412191658-177186
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (54.6s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220412191917-177186 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0412 19:19:40.347296  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:40.352874  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:40.363214  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:40.383468  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:40.423774  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:40.504073  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:40.664478  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:40.985079  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:41.626104  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:42.906580  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:45.467410  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:19:50.587727  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:20:00.828686  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220412191917-177186 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (54.598051207s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (54.60s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412191917-177186 addons enable ingress --alsologtostderr -v=5
E0412 19:20:21.308934  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220412191917-177186 addons enable ingress --alsologtostderr -v=5: (11.597154246s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.60s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.35s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412191917-177186 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.35s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (36.08s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220412191917-177186 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220412191917-177186 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.803211092s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220412191917-177186 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220412191917-177186 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [a825660d-8328-41ff-b5ec-b90b60f8d2b3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [a825660d-8328-41ff-b5ec-b90b60f8d2b3] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.004526687s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412191917-177186 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context ingress-addon-legacy-20220412191917-177186 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412191917-177186 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412191917-177186 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220412191917-177186 addons disable ingress-dns --alsologtostderr -v=1: (1.802326032s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412191917-177186 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220412191917-177186 addons disable ingress --alsologtostderr -v=1: (7.258773641s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.08s)

TestJSONOutput/start/Command (40.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220412192103-177186 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220412192103-177186 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (40.665261584s)
--- PASS: TestJSONOutput/start/Command (40.67s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220412192103-177186 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220412192103-177186 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.93s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220412192103-177186 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220412192103-177186 --output=json --user=testUser: (10.934341308s)
--- PASS: TestJSONOutput/stop/Command (10.93s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220412192157-177186 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220412192157-177186 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.189721ms)

-- stdout --
	{"specversion":"1.0","id":"9b2260b1-0517-41e2-86ea-1f60e3bea3c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220412192157-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"243f8a61-5bde-4ed0-b5a6-b232871c0eb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13812"}}
	{"specversion":"1.0","id":"3a04dd55-3245-4df1-b67d-d62b5cae27f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7026a055-6af2-4545-af38-05fa916884e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig"}}
	{"specversion":"1.0","id":"80fb0b3a-7c81-4327-98ab-68ac54c06dee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube"}}
	{"specversion":"1.0","id":"244c0dd8-e791-4d60-bfb5-e483dc4844da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"13b1e10b-edcd-4e39-ba68-0c22f1940f22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220412192157-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220412192157-177186
--- PASS: TestErrorJSONOutput (0.26s)
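With `--output=json`, minikube emits one CloudEvents-style JSON object per line; the final `io.k8s.sigs.minikube.error` event above carries the exit code (56) and error name (DRV_UNSUPPORTED_OS) that the test checks. A sketch of consuming such a line, using the error event copied verbatim from the stdout block above:

```python
import json

# The error event line from the -- stdout -- block above, verbatim.
line = ('{"specversion":"1.0","id":"13b1e10b-edcd-4e39-ba68-0c22f1940f22",'
        '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
        '"issues":"","message":"The driver \'fail\' is not supported on linux/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
# The event kind is carried in "type"; the payload sits under "data".
assert event["type"] == "io.k8s.sigs.minikube.error"
print(event["data"]["name"], event["data"]["exitcode"])  # DRV_UNSUPPORTED_OS 56
print(event["data"]["message"])  # The driver 'fail' is not supported on linux/amd64
```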

TestKicCustomNetwork/create_custom_network (27.11s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220412192157-177186 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220412192157-177186 --network=: (24.863105401s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220412192157-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220412192157-177186
E0412 19:22:24.189353  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220412192157-177186: (2.214957141s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.11s)

TestKicCustomNetwork/use_default_bridge_network (26.72s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220412192225-177186 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220412192225-177186 --network=bridge: (24.633448829s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220412192225-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220412192225-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220412192225-177186: (2.052542036s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.72s)

TestKicExistingNetwork (26.64s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220412192251-177186 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220412192251-177186 --network=existing-network: (24.250312184s)
helpers_test.go:175: Cleaning up "existing-network-20220412192251-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220412192251-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220412192251-177186: (2.178829186s)
--- PASS: TestKicExistingNetwork (26.64s)

TestKicCustomSubnet (26.21s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220412192318-177186 --subnet=192.168.60.0/24
E0412 19:23:21.118348  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:21.123638  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:21.133906  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:21.154168  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:21.194514  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:21.274835  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:21.435240  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:21.755838  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:22.396733  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:23.677108  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:26.238212  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:31.359085  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:23:41.600018  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220412192318-177186 --subnet=192.168.60.0/24: (23.981440576s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220412192318-177186 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220412192318-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220412192318-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220412192318-177186: (2.199844467s)
--- PASS: TestKicCustomSubnet (26.21s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (5.65s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220412192344-177186 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220412192344-177186 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.645472505s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.65s)

TestMountStart/serial/VerifyMountFirst (0.32s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220412192344-177186 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (5.25s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220412192344-177186 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220412192344-177186 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.251051613s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.25s)

TestMountStart/serial/VerifyMountSecond (0.32s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220412192344-177186 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

TestMountStart/serial/DeleteFirst (1.73s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220412192344-177186 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220412192344-177186 --alsologtostderr -v=5: (1.728059048s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220412192344-177186 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (1.25s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220412192344-177186
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220412192344-177186: (1.254209387s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (6.61s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220412192344-177186
E0412 19:24:02.080269  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220412192344-177186: (5.613891042s)
--- PASS: TestMountStart/serial/RestartStopped (6.61s)

TestMountStart/serial/VerifyMountPostStop (0.32s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220412192344-177186 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (81.63s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412192408-177186 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0412 19:24:40.347340  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:24:43.040685  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:25:08.030346  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
E0412 19:25:24.187011  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:24.192318  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:24.202538  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:24.222804  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:24.263077  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:24.343400  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:24.504431  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:24.824965  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:25.465515  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:26.746580  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:25:29.306747  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220412192408-177186 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m21.080071933s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.63s)

TestMultiNode/serial/DeployApp2Nodes (3.92s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- rollout status deployment/busybox: (2.235282886s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-6j7cf -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-ntnhx -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-6j7cf -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-ntnhx -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-6j7cf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-ntnhx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.92s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-6j7cf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-6j7cf -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-ntnhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0412 19:25:34.427753  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412192408-177186 -- exec busybox-7978565885-ntnhx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
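The ping test above derives the host gateway IP by slicing busybox `nslookup` output with `awk 'NR==5' | cut -d' ' -f3`. That pipeline can be exercised standalone against canned output; the sample below is hypothetical (real busybox resolver output varies slightly, which is why the test pins a line number):

```shell
#!/bin/sh
# Canned busybox-style nslookup output (hypothetical sample; real output varies).
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Same extraction as the test: line 5, third space-separated field.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # 192.168.49.1
```

Note that `cut -d' '` splits on every single space, so the field index depends on the exact spacing of the `Address 1:` line.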

TestMultiNode/serial/AddNode (25.5s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220412192408-177186 -v 3 --alsologtostderr
E0412 19:25:44.668237  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220412192408-177186 -v 3 --alsologtostderr: (24.771717233s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.50s)

TestMultiNode/serial/ProfileList (0.34s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

TestMultiNode/serial/CopyFile (11.49s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp testdata/cp-test.txt multinode-20220412192408-177186:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp multinode-20220412192408-177186:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1076974089/001/cp-test_multinode-20220412192408-177186.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp multinode-20220412192408-177186:/home/docker/cp-test.txt multinode-20220412192408-177186-m02:/home/docker/cp-test_multinode-20220412192408-177186_multinode-20220412192408-177186-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m02 "sudo cat /home/docker/cp-test_multinode-20220412192408-177186_multinode-20220412192408-177186-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp multinode-20220412192408-177186:/home/docker/cp-test.txt multinode-20220412192408-177186-m03:/home/docker/cp-test_multinode-20220412192408-177186_multinode-20220412192408-177186-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m03 "sudo cat /home/docker/cp-test_multinode-20220412192408-177186_multinode-20220412192408-177186-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp testdata/cp-test.txt multinode-20220412192408-177186-m02:/home/docker/cp-test.txt
E0412 19:26:04.961577  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:26:05.148872  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp multinode-20220412192408-177186-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1076974089/001/cp-test_multinode-20220412192408-177186-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp multinode-20220412192408-177186-m02:/home/docker/cp-test.txt multinode-20220412192408-177186:/home/docker/cp-test_multinode-20220412192408-177186-m02_multinode-20220412192408-177186.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186 "sudo cat /home/docker/cp-test_multinode-20220412192408-177186-m02_multinode-20220412192408-177186.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp multinode-20220412192408-177186-m02:/home/docker/cp-test.txt multinode-20220412192408-177186-m03:/home/docker/cp-test_multinode-20220412192408-177186-m02_multinode-20220412192408-177186-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m03 "sudo cat /home/docker/cp-test_multinode-20220412192408-177186-m02_multinode-20220412192408-177186-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp testdata/cp-test.txt multinode-20220412192408-177186-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp multinode-20220412192408-177186-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1076974089/001/cp-test_multinode-20220412192408-177186-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp multinode-20220412192408-177186-m03:/home/docker/cp-test.txt multinode-20220412192408-177186:/home/docker/cp-test_multinode-20220412192408-177186-m03_multinode-20220412192408-177186.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186 "sudo cat /home/docker/cp-test_multinode-20220412192408-177186-m03_multinode-20220412192408-177186.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 cp multinode-20220412192408-177186-m03:/home/docker/cp-test.txt multinode-20220412192408-177186-m02:/home/docker/cp-test_multinode-20220412192408-177186-m03_multinode-20220412192408-177186-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 ssh -n multinode-20220412192408-177186-m02 "sudo cat /home/docker/cp-test_multinode-20220412192408-177186-m03_multinode-20220412192408-177186-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.49s)

TestMultiNode/serial/StopNode (2.42s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220412192408-177186 node stop m03: (1.260389456s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220412192408-177186 status: exit status 7 (589.199142ms)
-- stdout --
	multinode-20220412192408-177186
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220412192408-177186-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220412192408-177186-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220412192408-177186 status --alsologtostderr: exit status 7 (572.891324ms)
-- stdout --
	multinode-20220412192408-177186
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220412192408-177186-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220412192408-177186-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0412 19:26:13.872790  269102 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:26:13.872911  269102 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:26:13.872921  269102 out.go:310] Setting ErrFile to fd 2...
	I0412 19:26:13.872926  269102 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:26:13.873061  269102 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:26:13.873217  269102 out.go:304] Setting JSON to false
	I0412 19:26:13.873238  269102 mustload.go:65] Loading cluster: multinode-20220412192408-177186
	I0412 19:26:13.873552  269102 config.go:178] Loaded profile config "multinode-20220412192408-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:26:13.873569  269102 status.go:253] checking status of multinode-20220412192408-177186 ...
	I0412 19:26:13.873918  269102 cli_runner.go:164] Run: docker container inspect multinode-20220412192408-177186 --format={{.State.Status}}
	I0412 19:26:13.905290  269102 status.go:328] multinode-20220412192408-177186 host status = "Running" (err=<nil>)
	I0412 19:26:13.905316  269102 host.go:66] Checking if "multinode-20220412192408-177186" exists ...
	I0412 19:26:13.905536  269102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220412192408-177186
	I0412 19:26:13.935061  269102 host.go:66] Checking if "multinode-20220412192408-177186" exists ...
	I0412 19:26:13.935325  269102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 19:26:13.935361  269102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220412192408-177186
	I0412 19:26:13.965195  269102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/multinode-20220412192408-177186/id_rsa Username:docker}
	I0412 19:26:14.049510  269102 ssh_runner.go:195] Run: systemctl --version
	I0412 19:26:14.053344  269102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 19:26:14.061936  269102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:26:14.146981  269102 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:100 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:52 SystemTime:2022-04-12 19:26:14.090000602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:26:14.147510  269102 kubeconfig.go:92] found "multinode-20220412192408-177186" server: "https://192.168.49.2:8443"
	I0412 19:26:14.147538  269102 api_server.go:165] Checking apiserver status ...
	I0412 19:26:14.147566  269102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 19:26:14.156489  269102 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1707/cgroup
	I0412 19:26:14.163204  269102 api_server.go:181] apiserver freezer: "5:freezer:/docker/4128addada88cf8ad20f85723542a672183e3a69acc0411b9f63a170f0b8c154/kubepods/burstable/poddd8dbf46c1afdffc4c9c3f1ba7089089/ce9c167a0b3cc4893a618d99ebde118bf021fb93ed04ca1d3c62d67daa08d8be"
	I0412 19:26:14.163252  269102 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4128addada88cf8ad20f85723542a672183e3a69acc0411b9f63a170f0b8c154/kubepods/burstable/poddd8dbf46c1afdffc4c9c3f1ba7089089/ce9c167a0b3cc4893a618d99ebde118bf021fb93ed04ca1d3c62d67daa08d8be/freezer.state
	I0412 19:26:14.169257  269102 api_server.go:203] freezer state: "THAWED"
	I0412 19:26:14.169281  269102 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 19:26:14.174010  269102 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0412 19:26:14.174031  269102 status.go:419] multinode-20220412192408-177186 apiserver status = Running (err=<nil>)
	I0412 19:26:14.174040  269102 status.go:255] multinode-20220412192408-177186 status: &{Name:multinode-20220412192408-177186 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0412 19:26:14.174059  269102 status.go:253] checking status of multinode-20220412192408-177186-m02 ...
	I0412 19:26:14.174288  269102 cli_runner.go:164] Run: docker container inspect multinode-20220412192408-177186-m02 --format={{.State.Status}}
	I0412 19:26:14.205272  269102 status.go:328] multinode-20220412192408-177186-m02 host status = "Running" (err=<nil>)
	I0412 19:26:14.205298  269102 host.go:66] Checking if "multinode-20220412192408-177186-m02" exists ...
	I0412 19:26:14.205576  269102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220412192408-177186-m02
	I0412 19:26:14.234897  269102 host.go:66] Checking if "multinode-20220412192408-177186-m02" exists ...
	I0412 19:26:14.235178  269102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 19:26:14.235220  269102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220412192408-177186-m02
	I0412 19:26:14.265881  269102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/multinode-20220412192408-177186-m02/id_rsa Username:docker}
	I0412 19:26:14.348955  269102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 19:26:14.357525  269102 status.go:255] multinode-20220412192408-177186-m02 status: &{Name:multinode-20220412192408-177186-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0412 19:26:14.357577  269102 status.go:253] checking status of multinode-20220412192408-177186-m03 ...
	I0412 19:26:14.357894  269102 cli_runner.go:164] Run: docker container inspect multinode-20220412192408-177186-m03 --format={{.State.Status}}
	I0412 19:26:14.388855  269102 status.go:328] multinode-20220412192408-177186-m03 host status = "Stopped" (err=<nil>)
	I0412 19:26:14.388877  269102 status.go:341] host is not running, skipping remaining checks
	I0412 19:26:14.388883  269102 status.go:255] multinode-20220412192408-177186-m03 status: &{Name:multinode-20220412192408-177186-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (25.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220412192408-177186 node start m03 --alsologtostderr: (24.22453942s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.04s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (101.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220412192408-177186
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220412192408-177186
E0412 19:26:46.110867  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220412192408-177186: (22.610875923s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412192408-177186 --wait=true -v=8 --alsologtostderr
E0412 19:28:08.032081  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220412192408-177186 --wait=true -v=8 --alsologtostderr: (1m18.670462198s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220412192408-177186
--- PASS: TestMultiNode/serial/RestartKeepsNodes (101.40s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 node delete m03
E0412 19:28:21.118633  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220412192408-177186 node delete m03: (4.487026258s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)
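The readiness check at multinode_test.go:430 above uses a kubectl go-template to print the `.status` of every node's `Ready` condition. A minimal Python sketch of the same filter, run against `kubectl get nodes -o json`-shaped data (the sample payload and node names below are hypothetical, trimmed to the fields the template touches):

```python
import json

# Hypothetical, trimmed sample of `kubectl get nodes -o json` output.
nodes_json = json.dumps({
    "items": [
        {"metadata": {"name": "multinode-demo"},
         "status": {"conditions": [
             {"type": "MemoryPressure", "status": "False"},
             {"type": "Ready", "status": "True"}]}},
        {"metadata": {"name": "multinode-demo-m02"},
         "status": {"conditions": [
             {"type": "Ready", "status": "True"}]}},
    ]
})

def ready_statuses(payload):
    """Mirror the go-template: emit .status for each condition of type Ready."""
    doc = json.loads(payload)
    return [cond["status"]
            for item in doc["items"]
            for cond in item["status"]["conditions"]
            if cond["type"] == "Ready"]

print(ready_statuses(nodes_json))  # -> ['True', 'True']
```

The test passes when every surviving node reports `True` here, which is why the log above runs it immediately after deleting m03.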

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220412192408-177186 stop: (21.491388636s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220412192408-177186 status: exit status 7 (117.029313ms)

                                                
                                                
-- stdout --
	multinode-20220412192408-177186
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220412192408-177186-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220412192408-177186 status --alsologtostderr: exit status 7 (116.45507ms)

                                                
                                                
-- stdout --
	multinode-20220412192408-177186
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220412192408-177186-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0412 19:28:47.655766  283773 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:28:47.655880  283773 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:28:47.655889  283773 out.go:310] Setting ErrFile to fd 2...
	I0412 19:28:47.655894  283773 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:28:47.655990  283773 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:28:47.656130  283773 out.go:304] Setting JSON to false
	I0412 19:28:47.656151  283773 mustload.go:65] Loading cluster: multinode-20220412192408-177186
	I0412 19:28:47.656455  283773 config.go:178] Loaded profile config "multinode-20220412192408-177186": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0412 19:28:47.656470  283773 status.go:253] checking status of multinode-20220412192408-177186 ...
	I0412 19:28:47.656835  283773 cli_runner.go:164] Run: docker container inspect multinode-20220412192408-177186 --format={{.State.Status}}
	I0412 19:28:47.687290  283773 status.go:328] multinode-20220412192408-177186 host status = "Stopped" (err=<nil>)
	I0412 19:28:47.687326  283773 status.go:341] host is not running, skipping remaining checks
	I0412 19:28:47.687342  283773 status.go:255] multinode-20220412192408-177186 status: &{Name:multinode-20220412192408-177186 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0412 19:28:47.687380  283773 status.go:253] checking status of multinode-20220412192408-177186-m02 ...
	I0412 19:28:47.687631  283773 cli_runner.go:164] Run: docker container inspect multinode-20220412192408-177186-m02 --format={{.State.Status}}
	I0412 19:28:47.716495  283773 status.go:328] multinode-20220412192408-177186-m02 host status = "Stopped" (err=<nil>)
	I0412 19:28:47.716517  283773 status.go:341] host is not running, skipping remaining checks
	I0412 19:28:47.716523  283773 status.go:255] multinode-20220412192408-177186-m02 status: &{Name:multinode-20220412192408-177186-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.73s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (84.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412192408-177186 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0412 19:28:48.802056  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
E0412 19:29:40.348229  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220412192408-177186 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.474354034s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412192408-177186 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.16s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220412192408-177186
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412192408-177186-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220412192408-177186-m02 --driver=docker  --container-runtime=docker: exit status 14 (72.848072ms)

                                                
                                                
-- stdout --
	* [multinode-20220412192408-177186-m02] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220412192408-177186-m02' is duplicated with machine name 'multinode-20220412192408-177186-m02' in profile 'multinode-20220412192408-177186'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412192408-177186-m03 --driver=docker  --container-runtime=docker
E0412 19:30:24.186684  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220412192408-177186-m03 --driver=docker  --container-runtime=docker: (24.618596601s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220412192408-177186
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220412192408-177186: exit status 80 (330.404397ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220412192408-177186
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220412192408-177186-m03 already exists in multinode-20220412192408-177186-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220412192408-177186-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220412192408-177186-m03: (2.339412198s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.42s)

                                                
                                    
TestPreload (107.56s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220412193043-177186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
E0412 19:30:51.872834  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220412193043-177186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m10.79792674s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220412193043-177186 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220412193043-177186 -- docker pull gcr.io/k8s-minikube/busybox: (1.056969669s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220412193043-177186 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220412193043-177186 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (33.034503074s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220412193043-177186 -- docker images
helpers_test.go:175: Cleaning up "test-preload-20220412193043-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220412193043-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220412193043-177186: (2.324872653s)
--- PASS: TestPreload (107.56s)

                                                
                                    
TestScheduledStopUnix (98.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220412193231-177186 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220412193231-177186 --memory=2048 --driver=docker  --container-runtime=docker: (24.833149782s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220412193231-177186 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220412193231-177186 -n scheduled-stop-20220412193231-177186
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220412193231-177186 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220412193231-177186 --cancel-scheduled
E0412 19:33:21.117815  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220412193231-177186 -n scheduled-stop-20220412193231-177186
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220412193231-177186
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220412193231-177186 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220412193231-177186
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220412193231-177186: exit status 7 (89.497389ms)

                                                
                                                
-- stdout --
	scheduled-stop-20220412193231-177186
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220412193231-177186 -n scheduled-stop-20220412193231-177186
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220412193231-177186 -n scheduled-stop-20220412193231-177186: exit status 7 (87.500297ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220412193231-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220412193231-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220412193231-177186: (1.791468389s)
--- PASS: TestScheduledStopUnix (98.27s)

                                                
                                    
TestSkaffold (54.53s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:56: (dbg) Run:  /tmp/skaffold.exe2725397766 version
skaffold_test.go:60: skaffold version: v1.38.0
skaffold_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20220412193409-177186 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220412193409-177186 --memory=2600 --driver=docker  --container-runtime=docker: (24.498469702s)
skaffold_test.go:83: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:107: (dbg) Run:  /tmp/skaffold.exe2725397766 run --minikube-profile skaffold-20220412193409-177186 --kube-context skaffold-20220412193409-177186 --status-check=true --port-forward=false --interactive=false
E0412 19:34:40.347255  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
skaffold_test.go:107: (dbg) Done: /tmp/skaffold.exe2725397766 run --minikube-profile skaffold-20220412193409-177186 --kube-context skaffold-20220412193409-177186 --status-check=true --port-forward=false --interactive=false: (17.034926938s)
skaffold_test.go:113: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-8fdf99cd-r44fr" [8d67419d-9a8f-4345-98e1-bd87315697a3] Running
skaffold_test.go:113: (dbg) TestSkaffold: app=leeroy-app healthy within 5.010131856s
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-5b65cd679-f7jg8" [c311bb1e-3da1-4196-b1fc-4019dcef70a9] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005930977s
helpers_test.go:175: Cleaning up "skaffold-20220412193409-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20220412193409-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220412193409-177186: (2.443500589s)
--- PASS: TestSkaffold (54.53s)

                                                
                                    
TestInsufficientStorage (12.89s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220412193504-177186 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220412193504-177186 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.390059585s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3f224ffc-0196-4168-8509-c04f1cae1e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220412193504-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"26a4e051-1f12-4a0e-befe-87d94ebdb922","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13812"}}
	{"specversion":"1.0","id":"76ddfe3a-ba89-40fa-bfab-a30621f669ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bf5ed18b-83f2-47f9-b9e0-5bc0294be30a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig"}}
	{"specversion":"1.0","id":"cecca209-6904-499d-adff-899638692a14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube"}}
	{"specversion":"1.0","id":"8e0b04ba-25d2-40eb-88d2-b5a8bd681a98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"39eaf8d0-f776-4f49-ba46-194ba490c5fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"30d352e0-564b-4365-9e0d-73a84dbf5386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"36936e05-f641-497a-86a7-9cea44fff1bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e05e45a-ed62-4cee-8512-4803b2562547","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"da242e16-37ec-4b79-a019-4ca3503ea12f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"6f63d93f-6441-4d4b-875f-dd0eb2e5141d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with the root privilege"}}
	{"specversion":"1.0","id":"9ec66be2-93f8-4c76-b164-ff4fb376c3b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220412193504-177186 in cluster insufficient-storage-20220412193504-177186","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"29dbcf0e-6b17-458d-a675-c3ce8337554d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"334d17a1-7535-4782-af47-42b0e8c28317","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1d9c6c7-e898-41a7-a039-40a5b4a8ffd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220412193504-177186 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220412193504-177186 --output=json --layout=cluster: exit status 7 (332.427392ms)

-- stdout --
	{"Name":"insufficient-storage-20220412193504-177186","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220412193504-177186","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0412 19:35:14.733262  317247 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220412193504-177186" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220412193504-177186 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220412193504-177186 --output=json --layout=cluster: exit status 7 (331.702554ms)

-- stdout --
	{"Name":"insufficient-storage-20220412193504-177186","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220412193504-177186","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0412 19:35:15.065508  317348 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220412193504-177186" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	E0412 19:35:15.073490  317348 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/insufficient-storage-20220412193504-177186/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220412193504-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220412193504-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220412193504-177186: (1.832046719s)
--- PASS: TestInsufficientStorage (12.89s)
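The RSRC_DOCKER_STORAGE event above carries its own remediation advice and exits with code 26. As a minimal, self-contained sketch of the kind of capacity check behind that exit code (the 95% threshold and the `check_var_usage` helper are assumptions for illustration, not minikube's actual implementation; real minikube inspects /var inside the driver):

```shell
#!/bin/sh
# Hypothetical sketch of the capacity check behind RSRC_DOCKER_STORAGE.
# The usage percentage is passed in so the sketch runs without Docker.
check_var_usage() {
  pcent=$1                       # integer percentage, e.g. 100
  if [ "$pcent" -ge 95 ]; then   # 95 is an assumed threshold
    echo "RSRC_DOCKER_STORAGE: /var is at ${pcent}% of capacity"
    return 26                    # matches the "exitcode":"26" in the event above
  fi
  echo "ok: /var at ${pcent}%"
}

check_var_usage 42                         # prints "ok: /var at 42%"
check_var_usage 100 || echo "would exit $?"
```

The fix itself is the advice printed in the event: `docker system prune` (optionally `-a`) on the host, or `minikube ssh -- docker system prune` inside the node.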

TestRunningBinaryUpgrade (67.00s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.711907716.exe start -p running-upgrade-20220412193809-177186 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0412 19:38:21.118856  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.711907716.exe start -p running-upgrade-20220412193809-177186 --memory=2200 --vm-driver=docker  --container-runtime=docker: (36.918600883s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220412193809-177186 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220412193809-177186 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.728019866s)
helpers_test.go:175: Cleaning up "running-upgrade-20220412193809-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220412193809-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220412193809-177186: (1.874539592s)
--- PASS: TestRunningBinaryUpgrade (67.00s)

TestKubernetesUpgrade (79.67s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412193621-177186 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412193621-177186 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.805199269s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220412193621-177186

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220412193621-177186: (5.085177417s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220412193621-177186 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220412193621-177186 status --format={{.Host}}: exit status 7 (94.477971ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412193621-177186 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412193621-177186 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.511904712s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220412193621-177186 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412193621-177186 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412193621-177186 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (92.623326ms)

-- stdout --
	* [kubernetes-upgrade-20220412193621-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220412193621-177186
	    minikube start -p kubernetes-upgrade-20220412193621-177186 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220412193621-1771862 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220412193621-177186 --kubernetes-version=v1.23.6-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412193621-177186 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412193621-177186 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (3.456853613s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220412193621-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220412193621-177186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220412193621-177186: (2.574608081s)
--- PASS: TestKubernetesUpgrade (79.67s)
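The K8S_DOWNGRADE_UNSUPPORTED exit (status 106) in the stderr above comes from comparing the requested Kubernetes version against the cluster's current one and refusing to go backwards. A rough, standalone sketch of such a guard (`version_lt` is an invented helper that ignores pre-release tags like `-rc.0`; this is not minikube's actual code):

```shell
#!/bin/sh
# Hypothetical downgrade guard: refuse when requested < current.
version_lt() {
  # returns 0 if $1 < $2 (numeric dotted comparison; pre-release tags stripped)
  [ "$1" = "$2" ] && return 1
  first=$(printf '%s\n%s\n' "${1%%-*}" "${2%%-*}" | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)
  [ "$first" = "${1%%-*}" ]
}

current="1.23.6-rc.0"; requested="1.16.0"
if version_lt "$requested" "$current"; then
  echo "K8S_DOWNGRADE_UNSUPPORTED: cannot go from v$current to v$requested"
fi
```

As the suggestion in the log shows, the supported paths are deleting and recreating the profile at the older version, or starting a second profile, rather than downgrading in place.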

TestMissingContainerUpgrade (111.00s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.3321965885.exe start -p missing-upgrade-20220412193516-177186 --memory=2200 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.3321965885.exe start -p missing-upgrade-20220412193516-177186 --memory=2200 --driver=docker  --container-runtime=docker: (58.667522352s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220412193516-177186

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220412193516-177186: (13.317635112s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220412193516-177186
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220412193516-177186 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220412193516-177186 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.841785976s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220412193516-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220412193516-177186

=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220412193516-177186: (2.747307764s)
--- PASS: TestMissingContainerUpgrade (111.00s)

TestStoppedBinaryUpgrade/Setup (0.46s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (78.563367ms)

-- stdout --
	* [NoKubernetes-20220412193516-177186] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (47.35s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --driver=docker  --container-runtime=docker: (46.888059312s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220412193516-177186 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.35s)

TestStoppedBinaryUpgrade/Upgrade (71.71s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.1367093482.exe start -p stopped-upgrade-20220412193516-177186 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0412 19:35:24.186768  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
E0412 19:36:03.390637  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.1367093482.exe start -p stopped-upgrade-20220412193516-177186 --memory=2200 --vm-driver=docker  --container-runtime=docker: (46.68749863s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.1367093482.exe -p stopped-upgrade-20220412193516-177186 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.1367093482.exe -p stopped-upgrade-20220412193516-177186 stop: (2.392041882s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220412193516-177186 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220412193516-177186 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.634490854s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (71.71s)

TestNoKubernetes/serial/StartWithStopK8s (15.60s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --no-kubernetes --driver=docker  --container-runtime=docker: (13.131711238s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220412193516-177186 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220412193516-177186 status -o json: exit status 2 (355.028672ms)

-- stdout --
	{"Name":"NoKubernetes-20220412193516-177186","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220412193516-177186

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220412193516-177186: (2.115579883s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.60s)

TestNoKubernetes/serial/Start (202.02s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --no-kubernetes --driver=docker  --container-runtime=docker: (3m22.023112123s)
--- PASS: TestNoKubernetes/serial/Start (202.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.50s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220412193516-177186

=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220412193516-177186: (1.50288468s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.50s)

TestPause/serial/Start (42.07s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220412193916-177186 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0412 19:39:40.348172  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220412193916-177186 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (42.072530646s)
--- PASS: TestPause/serial/Start (42.07s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220412193516-177186 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220412193516-177186 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.81206ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
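The check above passes precisely because `systemctl is-active --quiet` signals state only through its exit code: 0 means active, 3 is systemd's code for an inactive/dead unit, so a non-zero exit is what the test wants. A simulated stand-in that runs without systemd (`is_active` is invented for illustration):

```shell
#!/bin/sh
# Stand-in for: sudo systemctl is-active --quiet service kubelet
# systemd convention: exit 0 = active, 3 = inactive/dead.
is_active() {
  case "$1" in
    active) return 0 ;;
    *)      return 3 ;;
  esac
}

if ! is_active inactive; then
  echo "kubelet not running (as the test expects)"
fi
```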

TestNoKubernetes/serial/ProfileList (1.43s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.43s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220412193516-177186
E0412 19:39:44.162848  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220412193516-177186: (1.265839564s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (5.82s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220412193516-177186 --driver=docker  --container-runtime=docker: (5.823207057s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.82s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220412193516-177186 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220412193516-177186 "sudo systemctl is-active --quiet service kubelet": exit status 1 (327.050782ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestPause/serial/SecondStartNoReconfiguration (6.45s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220412193916-177186 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0412 19:40:01.793149  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220412193916-177186 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (6.436645796s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.45s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220412193916-177186 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.37s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220412193916-177186 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220412193916-177186 --output=json --layout=cluster: exit status 2 (371.819474ms)

-- stdout --
	{"Name":"pause-20220412193916-177186","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220412193916-177186","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
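The `--layout=cluster` JSON above encodes component state with HTTP-style status codes (507 InsufficientStorage, 418 Paused, 405 Stopped, 500 Error, 200 OK). A tiny decoder covering just the codes that appear in this report (`status_name` is a hypothetical helper, not part of minikube):

```shell
#!/bin/sh
# Map the StatusCode values seen in this report to their StatusName strings.
status_name() {
  case "$1" in
    200) echo "OK" ;;
    405) echo "Stopped" ;;
    418) echo "Paused" ;;
    500) echo "Error" ;;
    507) echo "InsufficientStorage" ;;
    *)   echo "Unknown" ;;
  esac
}

status_name 418   # prints "Paused"
status_name 507   # prints "InsufficientStorage"
```

Note that a paused cluster makes `minikube status` exit 2, which is why the test treats the non-zero exit as success.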

TestPause/serial/Unpause (0.57s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220412193916-177186 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

TestPause/serial/PauseAgain (0.85s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220412193916-177186 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

TestPause/serial/DeletePaused (2.45s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220412193916-177186 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220412193916-177186 --alsologtostderr -v=5: (2.447097974s)
--- PASS: TestPause/serial/DeletePaused (2.45s)

TestPause/serial/VerifyDeletedResources (0.68s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220412193916-177186
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220412193916-177186: exit status 1 (36.854292ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220412193916-177186

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.68s)

TestNetworkPlugins/group/auto/Start (57.96s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker
E0412 19:40:12.034119  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (57.95946736s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.96s)

TestNetworkPlugins/group/false/Start (51.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker
E0412 19:40:32.515097  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p false-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (51.895446573s)
--- PASS: TestNetworkPlugins/group/false/Start (51.90s)

TestNetworkPlugins/group/cilium/Start (90.33s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m30.327854279s)
--- PASS: TestNetworkPlugins/group/cilium/Start (90.33s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220412193701-177186 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (14.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20220412193701-177186 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-f26fg" [3aeccabd-fccc-4c52-8285-f9a41fc7d26c] Pending
helpers_test.go:342: "netcat-668db85669-f26fg" [3aeccabd-fccc-4c52-8285-f9a41fc7d26c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0412 19:41:13.475379  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-668db85669-f26fg" [3aeccabd-fccc-4c52-8285-f9a41fc7d26c] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.008003597s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.23s)

TestNetworkPlugins/group/false/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20220412193701-177186 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

TestNetworkPlugins/group/false/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context false-20220412193701-177186 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-sbbql" [df8e6296-fdca-4fcd-a41a-39c7f652bd4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:342: "netcat-668db85669-sbbql" [df8e6296-fdca-4fcd-a41a-39c7f652bd4e] Running

=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.005525337s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.37s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20220412193701-177186 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (5.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20220412193701-177186 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:231: (dbg) Non-zero exit: kubectl --context auto-20220412193701-177186 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.141265212s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.14s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:162: (dbg) Run:  kubectl --context false-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:181: (dbg) Run:  kubectl --context false-20220412193701-177186 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

TestNetworkPlugins/group/false/HairPin (5.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:231: (dbg) Run:  kubectl --context false-20220412193701-177186 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:231: (dbg) Non-zero exit: kubectl --context false-20220412193701-177186 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.159362246s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.16s)

TestNetworkPlugins/group/enable-default-cni/Start (292.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0412 19:41:47.233362  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (4m52.315071886s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (292.32s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-zzlnz" [5b5919ec-e753-4bea-9786-32b7454060bb] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.016447013s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220412193701-177186 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.39s)

TestNetworkPlugins/group/cilium/NetCatPod (11.98s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20220412193701-177186 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-j529v" [3d15d7a9-d475-486e-8754-3e9ca4df0b54] Pending
helpers_test.go:342: "netcat-668db85669-j529v" [3d15d7a9-d475-486e-8754-3e9ca4df0b54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-j529v" [3d15d7a9-d475-486e-8754-3e9ca4df0b54] Running
E0412 19:42:35.396286  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.006240204s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.98s)

TestNetworkPlugins/group/cilium/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20220412193701-177186 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

TestNetworkPlugins/group/cilium/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20220412193701-177186 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.12s)

TestNetworkPlugins/group/cilium/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20220412193701-177186 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/Start (55.62s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
E0412 19:43:21.118887  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (55.619089171s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.62s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-lhg2q" [7b4ead37-0f36-4532-bd2f-180a2775a6f3] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.028951355s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220412193701-177186 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20220412193701-177186 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-jpcfm" [ab2032a7-54e7-44e2-ae01-41ff6463dfc1] Pending
helpers_test.go:342: "netcat-668db85669-jpcfm" [ab2032a7-54e7-44e2-ae01-41ff6463dfc1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-jpcfm" [ab2032a7-54e7-44e2-ae01-41ff6463dfc1] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005804394s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220412193701-177186 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20220412193701-177186 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-f6x65" [903e2f78-009b-429c-9ad7-16b717cfaa39] Pending
helpers_test.go:342: "netcat-668db85669-f6x65" [903e2f78-009b-429c-9ad7-16b717cfaa39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-668db85669-f6x65" [903e2f78-009b-429c-9ad7-16b717cfaa39] Running
E0412 19:46:38.597477  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006262793s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

TestNetworkPlugins/group/bridge/Start (44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (43.997044518s)
--- PASS: TestNetworkPlugins/group/bridge/Start (44.00s)

TestNetworkPlugins/group/kubenet/Start (291.41s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0412 19:49:51.552758  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/skaffold-20220412193409-177186/client.crt: no such file or directory
E0412 19:50:02.794300  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220412193701-177186 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (4m51.408299082s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (291.41s)

TestStartStop/group/old-k8s-version/serial/FirstStart (315.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220412195020-177186 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0412 19:50:24.187174  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220412195020-177186 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (5m15.507656054s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (315.51s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220412193701-177186 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20220412193701-177186 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-2k8xz" [866a83e1-f686-4815-b090-9ad00b395d7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-2k8xz" [866a83e1-f686-4815-b090-9ad00b395d7b] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.006263351s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

TestStartStop/group/no-preload/serial/FirstStart (303.78s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220412195211-177186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
E0412 19:52:18.948769  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220412195211-177186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (5m3.780501041s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (303.78s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20220412193701-177186 "pgrep -a kubelet"
E0412 19:54:40.347246  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412191306-177186/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kubenet-20220412193701-177186 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-7wmkw" [2abdc300-9ecd-48db-9be8-b35d642314fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-7wmkw" [2abdc300-9ecd-48db-9be8-b35d642314fa] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.007554848s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.22s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context old-k8s-version-20220412195020-177186 create -f testdata/busybox.yaml
start_stop_delete_test.go:180: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [81ba6e94-c7ab-4ea5-aac7-1e95690538eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:342: "busybox" [81ba6e94-c7ab-4ea5-aac7-1e95690538eb] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:180: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.010914281s
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context old-k8s-version-20220412195020-177186 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)

TestStartStop/group/embed-certs/serial/FirstStart (38.9s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220412195542-177186 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220412195542-177186 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (38.898225904s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (38.90s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220412195020-177186 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:199: (dbg) Run:  kubectl --context old-k8s-version-20220412195020-177186 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (11.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220412195020-177186 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220412195020-177186 --alsologtostderr -v=3: (11.101033067s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412195020-177186 -n old-k8s-version-20220412195020-177186
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412195020-177186 -n old-k8s-version-20220412195020-177186: exit status 7 (100.102283ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220412195020-177186 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (562.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220412195020-177186 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0412 19:56:09.091529  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220412195020-177186 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (9m22.089657052s)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412195020-177186 -n old-k8s-version-20220412195020-177186
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (562.47s)

TestStartStop/group/embed-certs/serial/DeployApp (7.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context embed-certs-20220412195542-177186 create -f testdata/busybox.yaml
start_stop_delete_test.go:180: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [9c3fbd46-8e62-41ed-aed0-8455ccfb715a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [9c3fbd46-8e62-41ed-aed0-8455ccfb715a] Running
E0412 19:56:24.163710  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
start_stop_delete_test.go:180: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.012728725s
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context embed-certs-20220412195542-177186 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.58s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220412195542-177186 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:199: (dbg) Run:  kubectl --context embed-certs-20220412195542-177186 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.58s)

TestStartStop/group/embed-certs/serial/Stop (10.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220412195542-177186 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220412195542-177186 --alsologtostderr -v=3: (10.82612206s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.83s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412195542-177186 -n embed-certs-20220412195542-177186
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412195542-177186 -n embed-certs-20220412195542-177186: exit status 7 (114.453811ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220412195542-177186 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (338.44s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220412195542-177186 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0412 19:56:43.896376  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 19:56:54.137101  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220412195542-177186 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (5m37.906376505s)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412195542-177186 -n embed-certs-20220412195542-177186
E0412 20:02:18.949076  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (338.44s)

TestStartStop/group/no-preload/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context no-preload-20220412195211-177186 create -f testdata/busybox.yaml
start_stop_delete_test.go:180: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [b1e13366-314a-4866-82bd-56621644dcec] Pending
helpers_test.go:342: "busybox" [b1e13366-314a-4866-82bd-56621644dcec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [b1e13366-314a-4866-82bd-56621644dcec] Running
E0412 19:57:18.949169  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:180: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.009860948s
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context no-preload-20220412195211-177186 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.57s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220412195211-177186 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:199: (dbg) Run:  kubectl --context no-preload-20220412195211-177186 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.57s)

TestStartStop/group/no-preload/serial/Stop (10.92s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220412195211-177186 --alsologtostderr -v=3
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220412195211-177186 --alsologtostderr -v=3: (10.924543962s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220412195211-177186 -n no-preload-20220412195211-177186
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220412195211-177186 -n no-preload-20220412195211-177186: exit status 7 (93.416544ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220412195211-177186 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (594.95s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220412195211-177186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220412195211-177186 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (9m54.572696892s)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220412195211-177186 -n no-preload-20220412195211-177186
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (594.95s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (289.22s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220412200103-177186 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0412 20:01:09.091394  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
E0412 20:01:13.097450  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:01:18.115727  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 20:01:33.655288  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 20:01:54.058094  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:02:01.340082  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220412200103-177186 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (4m49.218352049s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (289.22s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-x9hzm" [4b61917c-8625-4beb-ab64-6472cd1990e2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-8469778f77-x9hzm" [4b61917c-8625-4beb-ab64-6472cd1990e2] Running
E0412 20:02:32.135872  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
start_stop_delete_test.go:258: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.013640224s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:271: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-x9hzm" [4b61917c-8625-4beb-ab64-6472cd1990e2] Running
start_stop_delete_test.go:271: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007016605s
start_stop_delete_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220412195542-177186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:288: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220412195542-177186 "sudo crictl images -o json"
start_stop_delete_test.go:288: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/embed-certs/serial/Pause (3.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220412195542-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412195542-177186 -n embed-certs-20220412195542-177186
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412195542-177186 -n embed-certs-20220412195542-177186: exit status 2 (377.611441ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220412195542-177186 -n embed-certs-20220412195542-177186
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220412195542-177186 -n embed-certs-20220412195542-177186: exit status 2 (428.465599ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220412195542-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412195542-177186 -n embed-certs-20220412195542-177186
E0412 20:02:41.160546  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220412195542-177186 -n embed-certs-20220412195542-177186
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)

TestStartStop/group/newest-cni/serial/FirstStart (39.7s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220412200244-177186 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
E0412 20:03:15.979124  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
E0412 20:03:21.118500  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412191658-177186/client.crt: no such file or directory
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220412200244-177186 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (39.697897449s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.70s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.65s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220412200244-177186 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:195: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.65s)

TestStartStop/group/newest-cni/serial/Stop (10.89s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220412200244-177186 --alsologtostderr -v=3
E0412 20:03:35.306837  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412193701-177186/client.crt: no such file or directory
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220412200244-177186 --alsologtostderr -v=3: (10.889593082s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.89s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220412200244-177186 -n newest-cni-20220412200244-177186
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220412200244-177186 -n newest-cni-20220412200244-177186: exit status 7 (94.079879ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220412200244-177186 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (19.23s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220412200244-177186 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0
E0412 20:03:41.994927  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
start_stop_delete_test.go:240: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220412200244-177186 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6-rc.0: (18.827599592s)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220412200244-177186 -n newest-cni-20220412200244-177186
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.23s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:268: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:288: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220412200244-177186 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (2.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220412200244-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220412200244-177186 -n newest-cni-20220412200244-177186
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220412200244-177186 -n newest-cni-20220412200244-177186: exit status 2 (380.911506ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220412200244-177186 -n newest-cni-20220412200244-177186
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220412200244-177186 -n newest-cni-20220412200244-177186: exit status 2 (386.998967ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220412200244-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220412200244-177186 -n newest-cni-20220412200244-177186
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220412200244-177186 -n newest-cni-20220412200244-177186
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.91s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-c5m6l" [87c68641-345c-49b4-bcc4-dd3ba898044a] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0412 20:05:21.506895  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:05:24.187275  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412191917-177186/client.crt: no such file or directory
start_stop_delete_test.go:258: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012064889s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:271: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-c5m6l" [87c68641-345c-49b4-bcc4-dd3ba898044a] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:271: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006403121s
start_stop_delete_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220412195020-177186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:288: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220412195020-177186 "sudo crictl images -o json"
start_stop_delete_test.go:288: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220412195020-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412195020-177186 -n old-k8s-version-20220412195020-177186
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412195020-177186 -n old-k8s-version-20220412195020-177186: exit status 2 (364.023035ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220412195020-177186 -n old-k8s-version-20220412195020-177186
E0412 20:05:32.134692  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220412195020-177186 -n old-k8s-version-20220412195020-177186: exit status 2 (369.513892ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220412195020-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412195020-177186 -n old-k8s-version-20220412195020-177186
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220412195020-177186 -n old-k8s-version-20220412195020-177186
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.38s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context default-k8s-different-port-20220412200103-177186 create -f testdata/busybox.yaml
start_stop_delete_test.go:180: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [fb4cebdd-7f6a-4df1-8237-7eb45feb8871] Pending
helpers_test.go:342: "busybox" [fb4cebdd-7f6a-4df1-8237-7eb45feb8871] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [fb4cebdd-7f6a-4df1-8237-7eb45feb8871] Running
E0412 20:05:57.154819  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412195020-177186/client.crt: no such file or directory
start_stop_delete_test.go:180: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 7.010797729s
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context default-k8s-different-port-20220412200103-177186 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.38s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.58s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220412200103-177186 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0412 20:05:59.819913  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412193701-177186/client.crt: no such file or directory
start_stop_delete_test.go:199: (dbg) Run:  kubectl --context default-k8s-different-port-20220412200103-177186 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.58s)

TestStartStop/group/default-k8s-different-port/serial/Stop (10.81s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220412200103-177186 --alsologtostderr -v=3
E0412 20:06:02.467623  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory
E0412 20:06:09.091726  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412193701-177186/client.crt: no such file or directory
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220412200103-177186 --alsologtostderr -v=3: (10.810201712s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.81s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412200103-177186 -n default-k8s-different-port-20220412200103-177186
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412200103-177186 -n default-k8s-different-port-20220412200103-177186: exit status 7 (92.598534ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220412200103-177186 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (570.24s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220412200103-177186 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5
E0412 20:06:17.635289  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412195020-177186/client.crt: no such file or directory
E0412 20:06:18.114902  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/false-20220412193701-177186/client.crt: no such file or directory
E0412 20:06:33.655643  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412193701-177186/client.crt: no such file or directory
E0412 20:06:58.595892  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412195020-177186/client.crt: no such file or directory
E0412 20:07:18.949536  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412193701-177186/client.crt: no such file or directory
E0412 20:07:24.387807  177186 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13812-173836-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kubenet-20220412193701-177186/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220412200103-177186 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.5: (9m29.876128016s)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412200103-177186 -n default-k8s-different-port-20220412200103-177186
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (570.24s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rdrj6" [d80c57bb-99af-44d9-b733-678a013c8b80] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:258: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01118833s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:271: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rdrj6" [d80c57bb-99af-44d9-b733-678a013c8b80] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:271: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006553056s
start_stop_delete_test.go:275: (dbg) Run:  kubectl --context no-preload-20220412195211-177186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:288: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220412195211-177186 "sudo crictl images -o json"
start_stop_delete_test.go:288: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/no-preload/serial/Pause (2.89s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220412195211-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220412195211-177186 -n no-preload-20220412195211-177186
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220412195211-177186 -n no-preload-20220412195211-177186: exit status 2 (364.590659ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220412195211-177186 -n no-preload-20220412195211-177186
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220412195211-177186 -n no-preload-20220412195211-177186: exit status 2 (363.736484ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220412195211-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220412195211-177186 -n no-preload-20220412195211-177186
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220412195211-177186 -n no-preload-20220412195211-177186
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-l77qg" [6f35dbbc-04cf-4e07-8341-b23930c3067a] Running
start_stop_delete_test.go:258: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012232227s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:271: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-l77qg" [6f35dbbc-04cf-4e07-8341-b23930c3067a] Running
helpers_test.go:342: "kubernetes-dashboard-8469778f77-l77qg" [6f35dbbc-04cf-4e07-8341-b23930c3067a] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:271: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00626543s
start_stop_delete_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220412200103-177186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:288: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220412200103-177186 "sudo crictl images -o json"
start_stop_delete_test.go:288: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-different-port/serial/Pause (2.88s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220412200103-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412200103-177186 -n default-k8s-different-port-20220412200103-177186
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412200103-177186 -n default-k8s-different-port-20220412200103-177186: exit status 2 (354.206308ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220412200103-177186 -n default-k8s-different-port-20220412200103-177186
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220412200103-177186 -n default-k8s-different-port-20220412200103-177186: exit status 2 (365.919512ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220412200103-177186 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412200103-177186 -n default-k8s-different-port-20220412200103-177186
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220412200103-177186 -n default-k8s-different-port-20220412200103-177186
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.88s)

Test skip (21/285)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.5/cached-images (0s)
=== RUN   TestDownloadOnly/v1.23.5/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.5/cached-images (0.00s)

TestDownloadOnly/v1.23.5/binaries (0s)
=== RUN   TestDownloadOnly/v1.23.5/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.5/binaries (0.00s)

TestDownloadOnly/v1.23.5/kubectl (0s)
=== RUN   TestDownloadOnly/v1.23.5/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.5/kubectl (0.00s)

TestDownloadOnly/v1.23.6-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.23.6-rc.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.6-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.23.6-rc.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/binaries (0.00s)

TestDownloadOnly/v1.23.6-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.23.6-rc.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.38s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220412193701-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220412193701-177186
--- SKIP: TestNetworkPlugins/group/flannel (0.38s)

TestStartStop/group/disable-driver-mounts (0.42s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:102: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220412200102-177186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220412200102-177186
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)