Test Report: Docker_Linux 13439

75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb:2022-02-07:22575

Test fail (12/279)

TestStartStop/group/old-k8s-version/serial/FirstStart (215.27s)
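
To re-run only this test outside CI, a standard go test filter against a locally built binary should work (a sketch: the subtest name and binary path come from the log below, while the build target and flags are assumptions about the minikube repo layout, not taken from this report):

	# Build the Linux binary that the integration suite shells out to (assumed Makefile target).
	make out/minikube-linux-amd64
	# -run takes a slash-separated pattern matching the failing subtest.
	go test ./test/integration -v -timeout 60m -run 'TestStartStop/group/old-k8s-version/serial/FirstStart'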

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220207194436-6868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20220207194436-6868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: exit status 80 (3m35.0347809s)

-- stdout --
	* [old-k8s-version-20220207194436-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node old-k8s-version-20220207194436-6868 in cluster old-k8s-version-20220207194436-6868
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20220207194436-6868" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0207 19:44:36.548981  214480 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:44:36.549073  214480 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:44:36.549102  214480 out.go:310] Setting ErrFile to fd 2...
	I0207 19:44:36.549109  214480 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:44:36.549251  214480 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:44:36.549554  214480 out.go:304] Setting JSON to false
	I0207 19:44:36.551320  214480 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5233,"bootTime":1644257844,"procs":836,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:44:36.551404  214480 start.go:122] virtualization: kvm guest
	I0207 19:44:36.554403  214480 out.go:176] * [old-k8s-version-20220207194436-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0207 19:44:36.555979  214480 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:44:36.554592  214480 notify.go:174] Checking for updates...
	I0207 19:44:36.557641  214480 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:44:36.558904  214480 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:44:36.560340  214480 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	I0207 19:44:36.561771  214480 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0207 19:44:36.562495  214480 config.go:176] Loaded profile config "cert-expiration-20220207194331-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:44:36.562602  214480 config.go:176] Loaded profile config "cert-options-20220207194401-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:44:36.562678  214480 config.go:176] Loaded profile config "docker-flags-20220207194358-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:44:36.562740  214480 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:44:36.627097  214480 docker.go:132] docker version: linux-20.10.12
	I0207 19:44:36.627215  214480 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:44:36.744178  214480 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:44:36.674103495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:44:36.744322  214480 docker.go:237] overlay module found
	I0207 19:44:36.747283  214480 out.go:176] * Using the docker driver based on user configuration
	I0207 19:44:36.747326  214480 start.go:281] selected driver: docker
	I0207 19:44:36.747334  214480 start.go:798] validating driver "docker" against <nil>
	I0207 19:44:36.747359  214480 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0207 19:44:36.747411  214480 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0207 19:44:36.747432  214480 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0207 19:44:36.749095  214480 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0207 19:44:36.749969  214480 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:44:36.906303  214480 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:44:36.837751736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:44:36.906524  214480 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 19:44:36.906684  214480 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 19:44:36.906709  214480 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 19:44:36.906726  214480 cni.go:93] Creating CNI manager for ""
	I0207 19:44:36.906736  214480 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:44:36.906745  214480 start_flags.go:302] config:
	{Name:old-k8s-version-20220207194436-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220207194436-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:44:36.910114  214480 out.go:176] * Starting control plane node old-k8s-version-20220207194436-6868 in cluster old-k8s-version-20220207194436-6868
	I0207 19:44:36.910164  214480 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:44:36.911484  214480 out.go:176] * Pulling base image ...
	I0207 19:44:36.911516  214480 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0207 19:44:36.911549  214480 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0207 19:44:36.911569  214480 cache.go:57] Caching tarball of preloaded images
	I0207 19:44:36.911612  214480 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:44:36.911813  214480 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 19:44:36.911828  214480 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0207 19:44:36.911946  214480 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/config.json ...
	I0207 19:44:36.911983  214480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/config.json: {Name:mkafe9e03efea0dac0a7ee86c97f762aa9786f0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:44:36.952960  214480 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 19:44:36.952985  214480 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 19:44:36.953001  214480 cache.go:208] Successfully downloaded all kic artifacts
	I0207 19:44:36.953032  214480 start.go:313] acquiring machines lock for old-k8s-version-20220207194436-6868: {Name:mk692aec54903d70e67a75e357402925de663e7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 19:44:36.953184  214480 start.go:317] acquired machines lock for "old-k8s-version-20220207194436-6868" in 133.64µs
	I0207 19:44:36.953209  214480 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20220207194436-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220207194436-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:44:36.953279  214480 start.go:126] createHost starting for "" (driver="docker")
	I0207 19:44:36.955768  214480 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0207 19:44:36.956006  214480 start.go:160] libmachine.API.Create for "old-k8s-version-20220207194436-6868" (driver="docker")
	I0207 19:44:36.956038  214480 client.go:168] LocalClient.Create starting
	I0207 19:44:36.956130  214480 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem
	I0207 19:44:36.956159  214480 main.go:130] libmachine: Decoding PEM data...
	I0207 19:44:36.956176  214480 main.go:130] libmachine: Parsing certificate...
	I0207 19:44:36.956231  214480 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem
	I0207 19:44:36.956246  214480 main.go:130] libmachine: Decoding PEM data...
	I0207 19:44:36.956260  214480 main.go:130] libmachine: Parsing certificate...
	I0207 19:44:36.956565  214480 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220207194436-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 19:44:36.993009  214480 cli_runner.go:180] docker network inspect old-k8s-version-20220207194436-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 19:44:36.993114  214480 network_create.go:254] running [docker network inspect old-k8s-version-20220207194436-6868] to gather additional debugging logs...
	I0207 19:44:36.993147  214480 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220207194436-6868
	W0207 19:44:37.027845  214480 cli_runner.go:180] docker network inspect old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:44:37.027880  214480 network_create.go:257] error running [docker network inspect old-k8s-version-20220207194436-6868]: docker network inspect old-k8s-version-20220207194436-6868: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220207194436-6868
	I0207 19:44:37.027908  214480 network_create.go:259] output of [docker network inspect old-k8s-version-20220207194436-6868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220207194436-6868
	
	** /stderr **
	I0207 19:44:37.027959  214480 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:44:37.063010  214480 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010be8] misses:0}
	I0207 19:44:37.063079  214480 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 19:44:37.063105  214480 network_create.go:106] attempt to create docker network old-k8s-version-20220207194436-6868 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 19:44:37.063161  214480 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220207194436-6868
	I0207 19:44:37.135771  214480 network_create.go:90] docker network old-k8s-version-20220207194436-6868 192.168.49.0/24 created
	I0207 19:44:37.135810  214480 kic.go:106] calculated static IP "192.168.49.2" for the "old-k8s-version-20220207194436-6868" container
	I0207 19:44:37.135873  214480 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 19:44:37.171894  214480 cli_runner.go:133] Run: docker volume create old-k8s-version-20220207194436-6868 --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true
	I0207 19:44:37.206859  214480 oci.go:102] Successfully created a docker volume old-k8s-version-20220207194436-6868
	I0207 19:44:37.206959  214480 cli_runner.go:133] Run: docker run --rm --name old-k8s-version-20220207194436-6868-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --entrypoint /usr/bin/test -v old-k8s-version-20220207194436-6868:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 19:44:37.771845  214480 oci.go:106] Successfully prepared a docker volume old-k8s-version-20220207194436-6868
	I0207 19:44:37.771887  214480 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0207 19:44:37.771910  214480 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 19:44:37.771972  214480 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220207194436-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 19:44:44.623627  214480 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220207194436-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.851595977s)
	I0207 19:44:44.623668  214480 kic.go:188] duration metric: took 6.851755 seconds to extract preloaded images to volume
	W0207 19:44:44.623704  214480 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0207 19:44:44.623714  214480 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0207 19:44:44.623770  214480 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 19:44:44.732454  214480 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220207194436-6868 --name old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --network old-k8s-version-20220207194436-6868 --ip 192.168.49.2 --volume old-k8s-version-20220207194436-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 19:44:44.812265  214480 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220207194436-6868 --name old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --network old-k8s-version-20220207194436-6868 --ip 192.168.49.2 --volume old-k8s-version-20220207194436-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 19:44:44.812360  214480 client.go:171] LocalClient.Create took 7.856315109s
	I0207 19:44:46.813321  214480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:44:46.813397  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:44:46.848351  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:44:46.848436  214480 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:44:47.124904  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:44:47.163496  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:44:47.163642  214480 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:44:47.704389  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:44:47.738378  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:44:47.738473  214480 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:44:48.394324  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:44:48.432731  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	W0207 19:44:48.432849  214480 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 19:44:48.432864  214480 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:44:48.432872  214480 start.go:129] duration metric: createHost completed in 11.479587912s
	I0207 19:44:48.432880  214480 start.go:80] releasing machines lock for "old-k8s-version-20220207194436-6868", held for 11.479684317s
	W0207 19:44:48.432910  214480 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220207194436-6868 --name old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --network old-k8s-version-20220207194436-6868 --ip 192.168.49.2 --volume old-k8s-version-20220207194436-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	23f3379b998482549d9e28defb320e50849d4f53f3f745ddbd099a37eb5a851a
	
	stderr:
	docker: Error response from daemon: network old-k8s-version-20220207194436-6868 not found.
	I0207 19:44:48.433366  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	W0207 19:44:48.472353  214480 start.go:575] delete host: Docker machine "old-k8s-version-20220207194436-6868" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0207 19:44:48.472570  214480 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220207194436-6868 --name old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --network old-k8s-version-20220207194436-6868 --ip 192.168.49.2 --volume old-k8s-version-20220207194436-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	23f3379b998482549d9e28defb320e50849d4f53f3f745ddbd099a37eb5a851a
	
	stderr:
	docker: Error response from daemon: network old-k8s-version-20220207194436-6868 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220207194436-6868 --name old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --network old-k8s-version-20220207194436-6868 --ip 192.168.49.2 --volume old-k8s-version-20220207194436-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	23f3379b998482549d9e28defb320e50849d4f53f3f745ddbd099a37eb5a851a
	
	stderr:
	docker: Error response from daemon: network old-k8s-version-20220207194436-6868 not found.
	
	I0207 19:44:48.472592  214480 start.go:585] Will try again in 5 seconds ...
	I0207 19:44:53.474480  214480 start.go:313] acquiring machines lock for old-k8s-version-20220207194436-6868: {Name:mk692aec54903d70e67a75e357402925de663e7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 19:44:53.474640  214480 start.go:317] acquired machines lock for "old-k8s-version-20220207194436-6868" in 119.934µs
	I0207 19:44:53.474669  214480 start.go:93] Skipping create...Using existing machine configuration
	I0207 19:44:53.474680  214480 fix.go:55] fixHost starting: 
	I0207 19:44:53.474946  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:44:53.510911  214480 fix.go:108] recreateIfNeeded on old-k8s-version-20220207194436-6868: state= err=<nil>
	I0207 19:44:53.510943  214480 fix.go:113] machineExists: false. err=machine does not exist
	I0207 19:44:53.548464  214480 out.go:176] * docker "old-k8s-version-20220207194436-6868" container is missing, will recreate.
	I0207 19:44:53.548519  214480 delete.go:124] DEMOLISHING old-k8s-version-20220207194436-6868 ...
	I0207 19:44:53.548615  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:44:53.585575  214480 stop.go:79] host is in state 
	I0207 19:44:53.585654  214480 main.go:130] libmachine: Stopping "old-k8s-version-20220207194436-6868"...
	I0207 19:44:53.585708  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:44:53.623395  214480 kic_runner.go:93] Run: systemctl --version
	I0207 19:44:53.623424  214480 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 systemctl --version]
	I0207 19:44:53.663346  214480 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 19:44:53.663369  214480 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 sudo service kubelet stop]
	I0207 19:44:53.701809  214480 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 23f3379b998482549d9e28defb320e50849d4f53f3f745ddbd099a37eb5a851a is not running
	
	** /stderr **
	W0207 19:44:53.701849  214480 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 23f3379b998482549d9e28defb320e50849d4f53f3f745ddbd099a37eb5a851a is not running
	I0207 19:44:53.701909  214480 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 19:44:53.701925  214480 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 sudo service kubelet stop]
	I0207 19:44:53.740503  214480 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 23f3379b998482549d9e28defb320e50849d4f53f3f745ddbd099a37eb5a851a is not running
	
	** /stderr **
	W0207 19:44:53.740560  214480 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 23f3379b998482549d9e28defb320e50849d4f53f3f745ddbd099a37eb5a851a is not running
	I0207 19:44:53.740661  214480 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0207 19:44:53.740683  214480 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0207 19:44:53.782045  214480 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 23f3379b998482549d9e28defb320e50849d4f53f3f745ddbd099a37eb5a851a is not running
	I0207 19:44:53.782081  214480 kic.go:466] successfully stopped kubernetes!
	I0207 19:44:53.782137  214480 kic_runner.go:93] Run: pgrep kube-apiserver
	I0207 19:44:53.782149  214480 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 pgrep kube-apiserver]
	I0207 19:44:53.856719  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:44:56.892236  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:44:59.927590  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:02.961718  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:06.004535  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:09.041976  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:12.089205  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:15.126532  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:18.162513  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:21.199574  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:24.240109  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:27.276440  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:30.313907  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:33.350434  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:36.386476  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:39.422510  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:42.457553  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:45.494512  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:48.530496  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:51.567282  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:54.603555  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:45:57.642769  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:00.679846  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:03.716313  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:06.755354  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:09.792499  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:12.829924  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:15.866482  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:18.902560  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:21.940153  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:24.977570  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:28.016147  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:31.050475  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:34.087418  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:37.121433  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:40.158490  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:43.193258  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:46.227614  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:49.263813  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:52.297419  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:55.332598  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:46:58.366506  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:01.402464  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:04.436846  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:07.471311  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:10.510509  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:13.546458  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:16.583869  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:19.619897  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:22.660979  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:25.695525  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:28.738469  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:31.772867  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:34.813651  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:37.852120  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:40.898603  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:43.954670  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:46.991728  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:50.031564  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:53.086494  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:56.121720  214480 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0207 19:47:56.121788  214480 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0207 19:47:56.122201  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	W0207 19:47:56.175598  214480 delete.go:135] deletehost failed: Docker machine "old-k8s-version-20220207194436-6868" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 19:47:56.175706  214480 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220207194436-6868
	I0207 19:47:56.211656  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:56.249277  214480 cli_runner.go:133] Run: docker exec --privileged -t old-k8s-version-20220207194436-6868 /bin/bash -c "sudo init 0"
	W0207 19:47:56.294287  214480 cli_runner.go:180] docker exec --privileged -t old-k8s-version-20220207194436-6868 /bin/bash -c "sudo init 0" returned with exit code 1
	I0207 19:47:56.294319  214480 oci.go:659] error shutdown old-k8s-version-20220207194436-6868: docker exec --privileged -t old-k8s-version-20220207194436-6868 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 23f3379b998482549d9e28defb320e50849d4f53f3f745ddbd099a37eb5a851a is not running
	I0207 19:47:57.294502  214480 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:47:57.336445  214480 oci.go:673] temporary error: container old-k8s-version-20220207194436-6868 status is  but expect it to be exited
	I0207 19:47:57.336477  214480 oci.go:679] Successfully shutdown container old-k8s-version-20220207194436-6868
	I0207 19:47:57.336517  214480 cli_runner.go:133] Run: docker rm -f -v old-k8s-version-20220207194436-6868
	I0207 19:47:57.382817  214480 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220207194436-6868
	W0207 19:47:57.417852  214480 cli_runner.go:180] docker container inspect -f {{.Id}} old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:47:57.417935  214480 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220207194436-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 19:47:57.457175  214480 cli_runner.go:180] docker network inspect old-k8s-version-20220207194436-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 19:47:57.457250  214480 network_create.go:254] running [docker network inspect old-k8s-version-20220207194436-6868] to gather additional debugging logs...
	I0207 19:47:57.457270  214480 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220207194436-6868
	W0207 19:47:57.494496  214480 cli_runner.go:180] docker network inspect old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:47:57.494535  214480 network_create.go:257] error running [docker network inspect old-k8s-version-20220207194436-6868]: docker network inspect old-k8s-version-20220207194436-6868: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220207194436-6868
	I0207 19:47:57.494556  214480 network_create.go:259] output of [docker network inspect old-k8s-version-20220207194436-6868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220207194436-6868
	
	** /stderr **
	W0207 19:47:57.494733  214480 delete.go:139] delete failed (probably ok) <nil>
	I0207 19:47:57.494747  214480 fix.go:120] Sleeping 1 second for extra luck!
	I0207 19:47:58.495520  214480 start.go:126] createHost starting for "" (driver="docker")
	I0207 19:47:58.497682  214480 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0207 19:47:58.497827  214480 start.go:160] libmachine.API.Create for "old-k8s-version-20220207194436-6868" (driver="docker")
	I0207 19:47:58.497868  214480 client.go:168] LocalClient.Create starting
	I0207 19:47:58.497967  214480 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem
	I0207 19:47:58.498013  214480 main.go:130] libmachine: Decoding PEM data...
	I0207 19:47:58.498041  214480 main.go:130] libmachine: Parsing certificate...
	I0207 19:47:58.498114  214480 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem
	I0207 19:47:58.498139  214480 main.go:130] libmachine: Decoding PEM data...
	I0207 19:47:58.498159  214480 main.go:130] libmachine: Parsing certificate...
	I0207 19:47:58.498426  214480 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220207194436-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 19:47:58.532361  214480 cli_runner.go:180] docker network inspect old-k8s-version-20220207194436-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 19:47:58.532425  214480 network_create.go:254] running [docker network inspect old-k8s-version-20220207194436-6868] to gather additional debugging logs...
	I0207 19:47:58.532470  214480 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220207194436-6868
	W0207 19:47:58.566951  214480 cli_runner.go:180] docker network inspect old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:47:58.566994  214480 network_create.go:257] error running [docker network inspect old-k8s-version-20220207194436-6868]: docker network inspect old-k8s-version-20220207194436-6868: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220207194436-6868
	I0207 19:47:58.567012  214480 network_create.go:259] output of [docker network inspect old-k8s-version-20220207194436-6868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220207194436-6868
	
	** /stderr **
	I0207 19:47:58.567068  214480 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:47:58.603122  214480 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-53dfd4938408 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:00:0f:9c:a5}}
	I0207 19:47:58.603796  214480 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-03eb3dcbfee4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:4a:b7:ec:46}}
	I0207 19:47:58.604505  214480 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-87e2158a238e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:9e:14:f0:2a}}
	I0207 19:47:58.605384  214480 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010be8 192.168.76.0:0xc000722450] misses:0}
	I0207 19:47:58.605424  214480 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 19:47:58.605437  214480 network_create.go:106] attempt to create docker network old-k8s-version-20220207194436-6868 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0207 19:47:58.605493  214480 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220207194436-6868
	I0207 19:47:58.689670  214480 network_create.go:90] docker network old-k8s-version-20220207194436-6868 192.168.76.0/24 created
	I0207 19:47:58.689716  214480 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-20220207194436-6868" container
	I0207 19:47:58.689786  214480 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 19:47:58.731518  214480 cli_runner.go:133] Run: docker volume create old-k8s-version-20220207194436-6868 --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true
	I0207 19:47:58.765631  214480 oci.go:102] Successfully created a docker volume old-k8s-version-20220207194436-6868
	I0207 19:47:58.765714  214480 cli_runner.go:133] Run: docker run --rm --name old-k8s-version-20220207194436-6868-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --entrypoint /usr/bin/test -v old-k8s-version-20220207194436-6868:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 19:47:59.529710  214480 oci.go:106] Successfully prepared a docker volume old-k8s-version-20220207194436-6868
	I0207 19:47:59.529796  214480 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0207 19:47:59.529826  214480 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 19:47:59.529896  214480 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220207194436-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 19:48:05.710689  214480 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220207194436-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.18074605s)
	I0207 19:48:05.710733  214480 kic.go:188] duration metric: took 6.180904 seconds to extract preloaded images to volume
	W0207 19:48:05.710786  214480 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0207 19:48:05.710796  214480 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0207 19:48:05.710853  214480 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 19:48:05.837904  214480 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220207194436-6868 --name old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --network old-k8s-version-20220207194436-6868 --ip 192.168.76.2 --volume old-k8s-version-20220207194436-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	W0207 19:48:05.916183  214480 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220207194436-6868 --name old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --network old-k8s-version-20220207194436-6868 --ip 192.168.76.2 --volume old-k8s-version-20220207194436-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 returned with exit code 125
	I0207 19:48:05.916263  214480 client.go:171] LocalClient.Create took 7.418385117s
	I0207 19:48:07.916752  214480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:48:07.916846  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:48:07.964798  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:48:07.964926  214480 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:48:08.196329  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:48:08.240153  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:48:08.240272  214480 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:48:08.685907  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:48:08.724379  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:48:08.724493  214480 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:48:09.043035  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:48:09.083382  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:48:09.083509  214480 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:48:09.637958  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:48:09.677524  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	W0207 19:48:09.677627  214480 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 19:48:09.677642  214480 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:48:09.677652  214480 start.go:129] duration metric: createHost completed in 11.182099346s
	I0207 19:48:09.677697  214480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:48:09.677723  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:48:09.716241  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:48:09.716364  214480 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:48:09.916805  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:48:09.950447  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:48:09.950558  214480 retry.go:31] will retry after 380.704736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:48:10.331949  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:48:10.365928  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	I0207 19:48:10.366019  214480 retry.go:31] will retry after 738.922478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:48:11.105439  214480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	W0207 19:48:11.139411  214480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868 returned with exit code 1
	W0207 19:48:11.139520  214480 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0207 19:48:11.139536  214480 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0207 19:48:11.139547  214480 fix.go:57] fixHost completed within 3m17.664866836s
	I0207 19:48:11.139559  214480 start.go:80] releasing machines lock for "old-k8s-version-20220207194436-6868", held for 3m17.664904449s
	W0207 19:48:11.139792  214480 out.go:241] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220207194436-6868" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220207194436-6868 --name old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --network old-k8s-version-20220207194436-6868 --ip 192.168.76.2 --volume old-k8s-version-20220207194436-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6
	
	stderr:
	docker: Error response from daemon: network old-k8s-version-20220207194436-6868 not found.
	
	I0207 19:48:11.344375  214480 out.go:176] 
	W0207 19:48:11.344597  214480 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220207194436-6868 --name old-k8s-version-20220207194436-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220207194436-6868 --network old-k8s-version-20220207194436-6868 --ip 192.168.76.2 --volume old-k8s-version-20220207194436-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8: exit status 125
	stdout:
	710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6
	
	stderr:
	docker: Error response from daemon: network old-k8s-version-20220207194436-6868 not found.
	
	W0207 19:48:11.344616  214480 out.go:241] * 
	W0207 19:48:11.345506  214480 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 19:48:11.514664  214480 out.go:176] 

** /stderr **
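The stderr log above pins down the failure mode: the network old-k8s-version-20220207194436-6868 was created at 19:47:58, yet the docker run at 19:48:05 exited 125 because that network no longer existed when the container was started, leaving the container stuck in the "created" state (see the docker inspect post-mortem below). The same race can be reproduced with plain Docker, outside minikube; "testnet" and "probe" below are illustrative names, not values from this report, and the exact daemon messages may vary by Docker version:

	# Sketch: "create succeeds, start fails" because the network vanished in between.
	docker network create testnet
	docker create --network testnet --name probe busybox sleep 60   # prints a container ID
	docker network rm testnet                                       # network removed before the container starts
	docker start probe                # Error response from daemon: network testnet not found
	docker inspect -f '{{.State.Status}}: {{.State.Error}}' probe   # created: network testnet not found
	docker rm probe

This matches the log: docker run printed a container ID on stdout (create succeeded) and then failed on the start step with "network ... not found" on stderr.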
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-20220207194436-6868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207194436-6868
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207194436-6868:

-- stdout --
	[
	    {
	        "Id": "710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6",
	        "Created": "2022-02-07T19:48:05.877064608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "network old-k8s-version-20220207194436-6868 not found",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/old-k8s-version-20220207194436-6868",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220207194436-6868:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207194436-6868",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b-init/diff:/var/lib/docker/overlay2/40e36e3239cb5157195ce223d31e5e12299d283013c03c510d3e8a2442fd2c92/diff:/var/lib/docker/overlay2/21617b479acf17653e84d6ae3cb822db5c7eac887dbffb288d5171c45b712c0d/diff:/var/lib/docker/overlay2/2dbc01d4f6abd3524aaa75f3f362b44291e07e9adaadba323bd734a77bfa9c6a/diff:/var/lib/docker/overlay2/1c3968298265a3203685852a8c6fa391e12253b485741654087afb7a90fc1d77/diff:/var/lib/docker/overlay2/6a2a8c5d6504d982da53621a1d6f96ee3336c19fd9f294d5b418cc706dc8944c/diff:/var/lib/docker/overlay2/7e7a079457982ab93f984a944ffef8ef6a0aedcf9ae87dd48d2bfaebfa401212/diff:/var/lib/docker/overlay2/fae622e4af16ac53e0d1ab6e7ec0b23cddddaf4c7b9c906b18db9f5a7421f38d/diff:/var/lib/docker/overlay2/d4355831ba7c15624e8cc51f64415d91ec01d79fc16f0d8cce7cf9819963c9be/diff:/var/lib/docker/overlay2/5453a1a1be3960eaab33a3909934d20d3b1f1d0bd01d04e14158548e63d9ccc7/diff:/var/lib/docker/overlay2/b7f7aaf98954a80aedd0a57753ced767fc40fd261655975f8bb2201f533af508/diff:/var/lib/docker/overlay2/582d45c1dfa23d0fcf227689ca05cc54f60cdf8562c7df098f15c0596f9f3b84/diff:/var/lib/docker/overlay2/97921dc2ea2a25724aa5bc8ee71d705ad02bb5de7327e9125b14e7ed3e0a36d9/diff:/var/lib/docker/overlay2/8994377961c9baa6fdb05a49604c2c1639c56f117040ce16cfcd7068142802d0/diff:/var/lib/docker/overlay2/741d31f19db93cecb47cf3edf12208c50adfa881f267e46fc2200168359e063e/diff:/var/lib/docker/overlay2/be1305b93735b2cb41c1050a14599a08f09c103ef39104313e5c6ea7783a25d0/diff:/var/lib/docker/overlay2/d2c6406a44063188bff06eacfb837bce43d713aa16c08f434607947a2e2aeb2d/diff:/var/lib/docker/overlay2/2354e37c2793df3a7faa18542aa5d3030952a40a0dd4361a9ad132d57efd3dea/diff:/var/lib/docker/overlay2/82b71b4192e75ce019792a62b12c4d48d3352cd8295673aa7b75c929d0c7f4ae/diff:/var/lib/docker/overlay2/6c62b320b27e5a2c13eea8d9b6e430fb56485a76ac7bf171136df923f96334b6/diff:/var/lib/docker/overlay2/f65c213239b185d01f445a11f073325d0aa4a30296ee7125aeec4abc8b80289e/diff:/var/lib/docker/overlay2/f4ab87d7e9bbbf343135421546bd636317abbc0406bd09bc0e7ded9abb5ffe07/diff:/var/lib/docker/overlay2/c962dce8dce172c66b9fae4d0533e0b9eb6f537f99f2ae091522820f3437e87b/diff:/var/lib/docker/overlay2/c5f3b750eb1f675794758011aa1f3cf1afaaea6aeabaacfa7127c4e8eb3e9d3f/diff:/var/lib/docker/overlay2/165d7a930e1764d6612409e5b2abab0706c771e2ea6d53d26f379e5c8420b768/diff:/var/lib/docker/overlay2/c639594ead9cef5a157dcd6c5d3b58acfb87a1b54e09f09a89e5efe42a0250cb/diff:/var/lib/docker/overlay2/22d4ffdeda2486e79e77cdf6b2966c4e3f7a7c1d385f6914cf9abbbafd681fc5/diff:/var/lib/docker/overlay2/06347ddaa20c499bc26010d7a1ef1ac9c484d7088bac49bc47d017af272c5c8b/diff:/var/lib/docker/overlay2/4039a84be3e1b1c0c36b2bd5611308130efae8b5d3993d514489c326b58181a2/diff:/var/lib/docker/overlay2/00ba3d7351a8d15c1f38c8a5267ac7da1315950a1583dfe162bbe06e240d4e4e/diff:/var/lib/docker/overlay2/b66091d419eb3b0a03f2363973ab6750206d5cb1e33c6a80f22ac7b1b1c20015/diff:/var/lib/docker/overlay2/60a3c3f90313e57450868dd29163b9746391dbc376387ee61b371e7753d2a9ed/diff:/var/lib/docker/overlay2/a4077b320de983a23a73f3509a3b65aa35c912b90e61cf3446d45334952197cc/diff:/var/lib/docker/overlay2/87466c009c98c77512f99106ac7b5b4682f6d57d0895993878a55843dfde4f0a/diff:/var/lib/docker/overlay2/be9cd77fbde8968efd17d63e6bf10bab9ae227bf6efd5ff15488effa8ed534f4/diff:/var/lib/docker/overlay2/692a8a7c4d738fb8caee425a6243fdaf5a5c4e7fdb6bda1969cba3c7099060d9/diff:/var/lib/docker/overlay2/90779bbe942cebdf0402a74acd25799917448b7948891aaf60636bbb4410e2d5/diff:/var/lib/docker/overlay2/f403aa656638a54017c9beeb448df9b3957711bbf52e5e92e279dd6a8e3a1a7b/diff:/var/lib/docker/overlay2/3e3a096efd54b9035c41e17e3c469d848ce1cddc9ad895ed288525a89e7d5153/diff:/var/lib/docker/overlay2/71a400a65bb51da094b9d5b672bf3e4973957a356b0480e8fd559aa527c64638/diff:/var/lib/docker/overlay2/5ecbee969df6610687807dc48c221a03964af0e197a0b8f0b5c38b70ab38cf4c/diff:/var/lib/docker/overlay2/1f806f3d9e1cd280380c82dd805cd7489ed4ed1d66b824ad880754d19b08dfa2/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207194436-6868",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207194436-6868/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207194436-6868",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207194436-6868": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
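Everything relevant in the inspect dump above sits in State and NetworkSettings.Ports: the container is stuck in "created" with ExitCode 128 and Error "network ... not found", and the port map is empty, which is exactly why every (index .NetworkSettings.Ports "22/tcp") lookup in the log returned exit code 1. A focused query over the same container, using the name from this report:

	docker container inspect \
	  -f 'status={{.State.Status}} exit={{.State.ExitCode}} error={{.State.Error}} ports={{.NetworkSettings.Ports}}' \
	  old-k8s-version-20220207194436-6868
	# status=created exit=128 error=network old-k8s-version-20220207194436-6868 not found ports=map[]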
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868: exit status 7 (180.458202ms)

-- stdout --
	Nonexistent

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220207194436-6868" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (215.27s)
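The retry.go:31 lines earlier in this test show the polling strategy: minikube re-runs the port inspection with short randomized waits (231ms, 445ms, 318ms, ...) before declaring the host unreachable. A minimal shell sketch of that loop, assuming GNU sleep with fractional seconds; minikube itself implements this in Go:

	for attempt in 1 2 3 4 5; do
	  docker container inspect \
	    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	    old-k8s-version-20220207194436-6868 && break
	  sleep "0.$((200 + RANDOM % 400))"   # jittered 200-599ms, like the delays in the log
	done

Since the container never started and its port map stayed empty, every attempt here fails the same way the logged ones did.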

TestStartStop/group/old-k8s-version/serial/DeployApp (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220207194436-6868 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220207194436-6868 create -f testdata/busybox.yaml: exit status 1 (39.533804ms)

** stderr ** 
	error: context "old-k8s-version-20220207194436-6868" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context old-k8s-version-20220207194436-6868 create -f testdata/busybox.yaml failed: exit status 1
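This is a cascade rather than an independent failure: FirstStart exited before provisioning the cluster, so no old-k8s-version-20220207194436-6868 context was ever written to the kubeconfig, and every kubectl --context call in the rest of the serial group fails the same way. A quick way to confirm the missing context:

	kubectl config get-contexts -o name | grep old-k8s-version-20220207194436-6868 \
	  || echo "context not found"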
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207194436-6868
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207194436-6868:

-- stdout --
	[
	    {
	        "Id": "710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6",
	        "Created": "2022-02-07T19:48:05.877064608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "network old-k8s-version-20220207194436-6868 not found",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/old-k8s-version-20220207194436-6868",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220207194436-6868:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207194436-6868",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b-init/diff:/var/lib/docker/overlay2/40e36e3239cb5157195ce223d31e5e12299d283013c03c510d3e8a2442fd2c92/diff:/var/lib/docker/overlay2/21617b479acf17653e84d6ae3cb822db5c7eac887dbffb288d5171c45b712c0d/diff:/var/lib/docker/overlay2/2dbc01d4f6abd3524aaa75f3f362b44291e07e9adaadba323bd734a77bfa9c6a/diff:/var/lib/docker/overlay2/1c3968298265a3203685852a8c6fa391e12253b485741654087afb7a90fc1d77/diff:/var/lib/docker/overlay2/6a2a8c5d6504d982da53621a1d6f96ee3336c19fd9f294d5b418cc706dc8944c/diff:/var/lib/docker/overlay2/7e7a079457982ab93f984a944ffef8ef6a0aedcf9ae87dd48d2bfaebfa401212/diff:/var/lib/docker/overlay2/fae622e4af16ac53e0d1ab6e7ec0b23cddddaf4c7b9c906b18db9f5a7421f38d/diff:/var/lib/docker/overlay2/d4355831ba7c15624e8cc51f64415d91ec01d79fc16f0d8cce7cf9819963c9be/diff:/var/lib/docker/overlay2/5453a1a1be3960eaab33a3909934d20d3b1f1d0bd01d04e14158548e63d9ccc7/diff:/var/lib/docker/overlay2/b7f7aaf98954a80aedd0a57753ced767fc40fd261655975f8bb2201f533af508/diff:/var/lib/docker/overlay2/582d45c1dfa23d0fcf227689ca05cc54f60cdf8562c7df098f15c0596f9f3b84/diff:/var/lib/docker/overlay2/97921dc2ea2a25724aa5bc8ee71d705ad02bb5de7327e9125b14e7ed3e0a36d9/diff:/var/lib/docker/overlay2/8994377961c9baa6fdb05a49604c2c1639c56f117040ce16cfcd7068142802d0/diff:/var/lib/docker/overlay2/741d31f19db93cecb47cf3edf12208c50adfa881f267e46fc2200168359e063e/diff:/var/lib/docker/overlay2/be1305b93735b2cb41c1050a14599a08f09c103ef39104313e5c6ea7783a25d0/diff:/var/lib/docker/overlay2/d2c6406a44063188bff06eacfb837bce43d713aa16c08f434607947a2e2aeb2d/diff:/var/lib/docker/overlay2/2354e37c2793df3a7faa18542aa5d3030952a40a0dd4361a9ad132d57efd3dea/diff:/var/lib/docker/overlay2/82b71b4192e75ce019792a62b12c4d48d3352cd8295673aa7b75c929d0c7f4ae/diff:/var/lib/docker/overlay2/6c62b320b27e5a2c13eea8d9b6e430fb56485a76ac7bf171136df923f96334b6/diff:/var/lib/docker/overlay2/f65c213239b185d01f445a11f073325d0aa4a30296ee7125aeec4abc8b80289e/diff:/var/lib/docker/overlay2/f4ab87d7e9bbbf343135421546bd636317abbc0406bd09bc0e7ded9abb5ffe07/diff:/var/lib/docker/overlay2/c962dce8dce172c66b9fae4d0533e0b9eb6f537f99f2ae091522820f3437e87b/diff:/var/lib/docker/overlay2/c5f3b750eb1f675794758011aa1f3cf1afaaea6aeabaacfa7127c4e8eb3e9d3f/diff:/var/lib/docker/overlay2/165d7a930e1764d6612409e5b2abab0706c771e2ea6d53d26f379e5c8420b768/diff:/var/lib/docker/overlay2/c639594ead9cef5a157dcd6c5d3b58acfb87a1b54e09f09a89e5efe42a0250cb/diff:/var/lib/docker/overlay2/22d4ffdeda2486e79e77cdf6b2966c4e3f7a7c1d385f6914cf9abbbafd681fc5/diff:/var/lib/docker/overlay2/06347ddaa20c499bc26010d7a1ef1ac9c484d7088bac49bc47d017af272c5c8b/diff:/var/lib/docker/overlay2/4039a84be3e1b1c0c36b2bd5611308130efae8b5d3993d514489c326b58181a2/diff:/var/lib/docker/overlay2/00ba3d7351a8d15c1f38c8a5267ac7da1315950a1583dfe162bbe06e240d4e4e/diff:/var/lib/docker/overlay2/b66091d419eb3b0a03f2363973ab6750206d5cb1e33c6a80f22ac7b1b1c20015/diff:/var/lib/docker/overlay2/60a3c3f90313e57450868dd29163b9746391dbc376387ee61b371e7753d2a9ed/diff:/var/lib/docker/overlay2/a4077b320de983a23a73f3509a3b65aa35c912b90e61cf3446d45334952197cc/diff:/var/lib/docker/overlay2/87466c009c98c77512f99106ac7b5b4682f6d57d0895993878a55843dfde4f0a/diff:/var/lib/docker/overlay2/be9cd77fbde8968efd17d63e6bf10bab9ae227bf6efd5ff15488effa8ed534f4/diff:/var/lib/docker/overlay2/692a8a7c4d738fb8caee425a6243fdaf5a5c4e7fdb6bda1969cba3c7099060d9/diff:/var/lib/docker/overlay2/90779bbe942cebdf0402a74acd25799917448b7948891aaf60636bbb4410e2d5/diff:/var/lib/docker/overlay2/f403aa656638a54017c9beeb448df9b3957711bbf52e5e92e279dd6a8e3a1a7b/diff:/var/lib/docker/overlay2/3e3a096efd54b9035c41e17e3c469d848ce1cddc9ad895ed288525a89e7d5153/diff:/var/lib/docker/overlay2/71a400a65bb51da094b9d5b672bf3e4973957a356b0480e8fd559aa527c64638/diff:/var/lib/docker/overlay2/5ecbee969df6610687807dc48c221a03964af0e197a0b8f0b5c38b70ab38cf4c/diff:/var/lib/docker/overlay2/1f806f3d9e1cd280380c82dd805cd7489ed4ed1d66b824ad880754d19b08dfa2/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207194436-6868",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207194436-6868/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207194436-6868",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207194436-6868": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
E0207 19:48:11.853925    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868: exit status 7 (151.455505ms)

-- stdout --
	Nonexistent

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220207194436-6868" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207194436-6868
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207194436-6868:

-- stdout --
	[
	    {
	        "Id": "710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6",
	        "Created": "2022-02-07T19:48:05.877064608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "network old-k8s-version-20220207194436-6868 not found",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/old-k8s-version-20220207194436-6868",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220207194436-6868:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207194436-6868",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b-init/diff:/var/lib/docker/overlay2/40e36e3239cb5157195ce223d31e5e12299d283013c03c510d3e8a2442fd2c92/diff:/var/lib/docker/overlay2/21617b479acf17653e84d6ae3cb822db5c7eac887dbffb288d5171c45b712c0d/diff:/var/lib/docker/overlay2/2dbc01d4f6abd3524aaa75f3f362b44291e07e9adaadba323bd734a77bfa9c6a/diff:/var/lib/docker/overlay2/1c3968298265a3203685852a8c6fa391e12253b485741654087afb7a90fc1d77/diff:/var/lib/docker/overlay2/6a2a8c5d6504d982da53621a1d6f96ee3336c19fd9f294d5b418cc706dc8944c/diff:/var/lib/docker/overlay2/7e7a079457982ab93f984a944ffef8ef6a0aedcf9ae87dd48d2bfaebfa401212/diff:/var/lib/docker/overlay2/fae622e4af16ac53e0d1ab6e7ec0b23cddddaf4c7b9c906b18db9f5a7421f38d/diff:/var/lib/docker/overlay2/d4355831ba7c15624e8cc51f64415d91ec01d79fc16f0d8cce7cf9819963c9be/diff:/var/lib/docker/overlay2/5453a1a1be3960eaab33a3909934d20d3b1f1d0bd01d04e14158548e63d9ccc7/diff:/var/lib/docker/overlay2/b7f7aa
f98954a80aedd0a57753ced767fc40fd261655975f8bb2201f533af508/diff:/var/lib/docker/overlay2/582d45c1dfa23d0fcf227689ca05cc54f60cdf8562c7df098f15c0596f9f3b84/diff:/var/lib/docker/overlay2/97921dc2ea2a25724aa5bc8ee71d705ad02bb5de7327e9125b14e7ed3e0a36d9/diff:/var/lib/docker/overlay2/8994377961c9baa6fdb05a49604c2c1639c56f117040ce16cfcd7068142802d0/diff:/var/lib/docker/overlay2/741d31f19db93cecb47cf3edf12208c50adfa881f267e46fc2200168359e063e/diff:/var/lib/docker/overlay2/be1305b93735b2cb41c1050a14599a08f09c103ef39104313e5c6ea7783a25d0/diff:/var/lib/docker/overlay2/d2c6406a44063188bff06eacfb837bce43d713aa16c08f434607947a2e2aeb2d/diff:/var/lib/docker/overlay2/2354e37c2793df3a7faa18542aa5d3030952a40a0dd4361a9ad132d57efd3dea/diff:/var/lib/docker/overlay2/82b71b4192e75ce019792a62b12c4d48d3352cd8295673aa7b75c929d0c7f4ae/diff:/var/lib/docker/overlay2/6c62b320b27e5a2c13eea8d9b6e430fb56485a76ac7bf171136df923f96334b6/diff:/var/lib/docker/overlay2/f65c213239b185d01f445a11f073325d0aa4a30296ee7125aeec4abc8b80289e/diff:/var/lib/d
ocker/overlay2/f4ab87d7e9bbbf343135421546bd636317abbc0406bd09bc0e7ded9abb5ffe07/diff:/var/lib/docker/overlay2/c962dce8dce172c66b9fae4d0533e0b9eb6f537f99f2ae091522820f3437e87b/diff:/var/lib/docker/overlay2/c5f3b750eb1f675794758011aa1f3cf1afaaea6aeabaacfa7127c4e8eb3e9d3f/diff:/var/lib/docker/overlay2/165d7a930e1764d6612409e5b2abab0706c771e2ea6d53d26f379e5c8420b768/diff:/var/lib/docker/overlay2/c639594ead9cef5a157dcd6c5d3b58acfb87a1b54e09f09a89e5efe42a0250cb/diff:/var/lib/docker/overlay2/22d4ffdeda2486e79e77cdf6b2966c4e3f7a7c1d385f6914cf9abbbafd681fc5/diff:/var/lib/docker/overlay2/06347ddaa20c499bc26010d7a1ef1ac9c484d7088bac49bc47d017af272c5c8b/diff:/var/lib/docker/overlay2/4039a84be3e1b1c0c36b2bd5611308130efae8b5d3993d514489c326b58181a2/diff:/var/lib/docker/overlay2/00ba3d7351a8d15c1f38c8a5267ac7da1315950a1583dfe162bbe06e240d4e4e/diff:/var/lib/docker/overlay2/b66091d419eb3b0a03f2363973ab6750206d5cb1e33c6a80f22ac7b1b1c20015/diff:/var/lib/docker/overlay2/60a3c3f90313e57450868dd29163b9746391dbc376387ee61b371e7753d
2a9ed/diff:/var/lib/docker/overlay2/a4077b320de983a23a73f3509a3b65aa35c912b90e61cf3446d45334952197cc/diff:/var/lib/docker/overlay2/87466c009c98c77512f99106ac7b5b4682f6d57d0895993878a55843dfde4f0a/diff:/var/lib/docker/overlay2/be9cd77fbde8968efd17d63e6bf10bab9ae227bf6efd5ff15488effa8ed534f4/diff:/var/lib/docker/overlay2/692a8a7c4d738fb8caee425a6243fdaf5a5c4e7fdb6bda1969cba3c7099060d9/diff:/var/lib/docker/overlay2/90779bbe942cebdf0402a74acd25799917448b7948891aaf60636bbb4410e2d5/diff:/var/lib/docker/overlay2/f403aa656638a54017c9beeb448df9b3957711bbf52e5e92e279dd6a8e3a1a7b/diff:/var/lib/docker/overlay2/3e3a096efd54b9035c41e17e3c469d848ce1cddc9ad895ed288525a89e7d5153/diff:/var/lib/docker/overlay2/71a400a65bb51da094b9d5b672bf3e4973957a356b0480e8fd559aa527c64638/diff:/var/lib/docker/overlay2/5ecbee969df6610687807dc48c221a03964af0e197a0b8f0b5c38b70ab38cf4c/diff:/var/lib/docker/overlay2/1f806f3d9e1cd280380c82dd805cd7489ed4ed1d66b824ad880754d19b08dfa2/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207194436-6868",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207194436-6868/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207194436-6868",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207194436-6868": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868: exit status 7 (114.544025ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220207194436-6868" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.39s)
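Across this group, the docker inspect dumps all show the same root condition: the profile's container sits in State.Status "created" with ExitCode 128 and Error "network old-k8s-version-20220207194436-6868 not found". It was created but never started, so "minikube status" can only report "Nonexistent". A minimal Go sketch of the probe that both the harness and libmachine run against it (the docker command is the one quoted throughout this report; the wrapper function is illustrative, not minikube code):

	// containerStatus runs the same probe as the post-mortems above:
	// docker container inspect <name> --format={{.State.Status}}
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Prints "created" for the container dumped above: it exists,
		// but starting it failed with exit code 128 (missing network).
		s, err := containerStatus("old-k8s-version-20220207194436-6868")
		fmt.Println(s, err)
	}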

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220207194436-6868 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220207194436-6868 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220207194436-6868 describe deploy/metrics-server -n kube-system: exit status 1 (41.659148ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220207194436-6868" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220207194436-6868 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207194436-6868
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207194436-6868:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6",
	        "Created": "2022-02-07T19:48:05.877064608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "network old-k8s-version-20220207194436-6868 not found",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/old-k8s-version-20220207194436-6868",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220207194436-6868:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207194436-6868",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b-init/diff:/var/lib/docker/overlay2/40e36e3239cb5157195ce223d31e5e12299d283013c03c510d3e8a2442fd2c92/diff:/var/lib/docker/overlay2/21617b479acf17653e84d6ae3cb822db5c7eac887dbffb288d5171c45b712c0d/diff:/var/lib/docker/overlay2/2dbc01d4f6abd3524aaa75f3f362b44291e07e9adaadba323bd734a77bfa9c6a/diff:/var/lib/docker/overlay2/1c3968298265a3203685852a8c6fa391e12253b485741654087afb7a90fc1d77/diff:/var/lib/docker/overlay2/6a2a8c5d6504d982da53621a1d6f96ee3336c19fd9f294d5b418cc706dc8944c/diff:/var/lib/docker/overlay2/7e7a079457982ab93f984a944ffef8ef6a0aedcf9ae87dd48d2bfaebfa401212/diff:/var/lib/docker/overlay2/fae622e4af16ac53e0d1ab6e7ec0b23cddddaf4c7b9c906b18db9f5a7421f38d/diff:/var/lib/docker/overlay2/d4355831ba7c15624e8cc51f64415d91ec01d79fc16f0d8cce7cf9819963c9be/diff:/var/lib/docker/overlay2/5453a1a1be3960eaab33a3909934d20d3b1f1d0bd01d04e14158548e63d9ccc7/diff:/var/lib/docker/overlay2/b7f7aa
f98954a80aedd0a57753ced767fc40fd261655975f8bb2201f533af508/diff:/var/lib/docker/overlay2/582d45c1dfa23d0fcf227689ca05cc54f60cdf8562c7df098f15c0596f9f3b84/diff:/var/lib/docker/overlay2/97921dc2ea2a25724aa5bc8ee71d705ad02bb5de7327e9125b14e7ed3e0a36d9/diff:/var/lib/docker/overlay2/8994377961c9baa6fdb05a49604c2c1639c56f117040ce16cfcd7068142802d0/diff:/var/lib/docker/overlay2/741d31f19db93cecb47cf3edf12208c50adfa881f267e46fc2200168359e063e/diff:/var/lib/docker/overlay2/be1305b93735b2cb41c1050a14599a08f09c103ef39104313e5c6ea7783a25d0/diff:/var/lib/docker/overlay2/d2c6406a44063188bff06eacfb837bce43d713aa16c08f434607947a2e2aeb2d/diff:/var/lib/docker/overlay2/2354e37c2793df3a7faa18542aa5d3030952a40a0dd4361a9ad132d57efd3dea/diff:/var/lib/docker/overlay2/82b71b4192e75ce019792a62b12c4d48d3352cd8295673aa7b75c929d0c7f4ae/diff:/var/lib/docker/overlay2/6c62b320b27e5a2c13eea8d9b6e430fb56485a76ac7bf171136df923f96334b6/diff:/var/lib/docker/overlay2/f65c213239b185d01f445a11f073325d0aa4a30296ee7125aeec4abc8b80289e/diff:/var/lib/d
ocker/overlay2/f4ab87d7e9bbbf343135421546bd636317abbc0406bd09bc0e7ded9abb5ffe07/diff:/var/lib/docker/overlay2/c962dce8dce172c66b9fae4d0533e0b9eb6f537f99f2ae091522820f3437e87b/diff:/var/lib/docker/overlay2/c5f3b750eb1f675794758011aa1f3cf1afaaea6aeabaacfa7127c4e8eb3e9d3f/diff:/var/lib/docker/overlay2/165d7a930e1764d6612409e5b2abab0706c771e2ea6d53d26f379e5c8420b768/diff:/var/lib/docker/overlay2/c639594ead9cef5a157dcd6c5d3b58acfb87a1b54e09f09a89e5efe42a0250cb/diff:/var/lib/docker/overlay2/22d4ffdeda2486e79e77cdf6b2966c4e3f7a7c1d385f6914cf9abbbafd681fc5/diff:/var/lib/docker/overlay2/06347ddaa20c499bc26010d7a1ef1ac9c484d7088bac49bc47d017af272c5c8b/diff:/var/lib/docker/overlay2/4039a84be3e1b1c0c36b2bd5611308130efae8b5d3993d514489c326b58181a2/diff:/var/lib/docker/overlay2/00ba3d7351a8d15c1f38c8a5267ac7da1315950a1583dfe162bbe06e240d4e4e/diff:/var/lib/docker/overlay2/b66091d419eb3b0a03f2363973ab6750206d5cb1e33c6a80f22ac7b1b1c20015/diff:/var/lib/docker/overlay2/60a3c3f90313e57450868dd29163b9746391dbc376387ee61b371e7753d
2a9ed/diff:/var/lib/docker/overlay2/a4077b320de983a23a73f3509a3b65aa35c912b90e61cf3446d45334952197cc/diff:/var/lib/docker/overlay2/87466c009c98c77512f99106ac7b5b4682f6d57d0895993878a55843dfde4f0a/diff:/var/lib/docker/overlay2/be9cd77fbde8968efd17d63e6bf10bab9ae227bf6efd5ff15488effa8ed534f4/diff:/var/lib/docker/overlay2/692a8a7c4d738fb8caee425a6243fdaf5a5c4e7fdb6bda1969cba3c7099060d9/diff:/var/lib/docker/overlay2/90779bbe942cebdf0402a74acd25799917448b7948891aaf60636bbb4410e2d5/diff:/var/lib/docker/overlay2/f403aa656638a54017c9beeb448df9b3957711bbf52e5e92e279dd6a8e3a1a7b/diff:/var/lib/docker/overlay2/3e3a096efd54b9035c41e17e3c469d848ce1cddc9ad895ed288525a89e7d5153/diff:/var/lib/docker/overlay2/71a400a65bb51da094b9d5b672bf3e4973957a356b0480e8fd559aa527c64638/diff:/var/lib/docker/overlay2/5ecbee969df6610687807dc48c221a03964af0e197a0b8f0b5c38b70ab38cf4c/diff:/var/lib/docker/overlay2/1f806f3d9e1cd280380c82dd805cd7489ed4ed1d66b824ad880754d19b08dfa2/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207194436-6868",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207194436-6868/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207194436-6868",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207194436-6868": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868: exit status 7 (109.112503ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220207194436-6868" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.60s)
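The kubectl error above ("context ... does not exist") follows directly from FirstStart exiting 80: no kubeconfig entry was ever written for the profile, so every later kubectl call against that context fails the same way. A small Go sketch of a pre-check for that condition, assuming kubectl is on PATH ("kubectl config get-contexts -o name" is standard kubectl; contextExists is an illustrative helper, not part of the harness):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// contextExists lists kubeconfig context names and scans for an exact match.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		// false here: FirstStart never completed, so no context was written.
		fmt.Println(contextExists("old-k8s-version-20220207194436-6868"))
	}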

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (182.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220207194436-6868 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-20220207194436-6868 --alsologtostderr -v=3: exit status 82 (3m2.793863736s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-20220207194436-6868"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0207 19:48:12.805349  236648 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:48:12.831185  236648 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:48:12.831211  236648 out.go:310] Setting ErrFile to fd 2...
	I0207 19:48:12.831217  236648 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:48:12.831406  236648 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:48:12.831760  236648 out.go:304] Setting JSON to false
	I0207 19:48:12.831885  236648 mustload.go:65] Loading cluster: old-k8s-version-20220207194436-6868
	I0207 19:48:12.833265  236648 config.go:176] Loaded profile config "old-k8s-version-20220207194436-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0207 19:48:12.833419  236648 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/config.json ...
	I0207 19:48:12.840415  236648 mustload.go:65] Loading cluster: old-k8s-version-20220207194436-6868
	I0207 19:48:12.840645  236648 config.go:176] Loaded profile config "old-k8s-version-20220207194436-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0207 19:48:12.840715  236648 stop.go:39] StopHost: old-k8s-version-20220207194436-6868
	I0207 19:48:12.946696  236648 out.go:176] * Stopping node "old-k8s-version-20220207194436-6868"  ...
	I0207 19:48:12.946819  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:12.990011  236648 stop.go:79] host is in state 
	I0207 19:48:12.990071  236648 main.go:130] libmachine: Stopping "old-k8s-version-20220207194436-6868"...
	I0207 19:48:12.990129  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:13.023733  236648 kic_runner.go:93] Run: systemctl --version
	I0207 19:48:13.023756  236648 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 systemctl --version]
	I0207 19:48:13.070158  236648 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 19:48:13.070181  236648 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 sudo service kubelet stop]
	I0207 19:48:13.107343  236648 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6 is not running
	
	** /stderr **
	W0207 19:48:13.107369  236648 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6 is not running
	I0207 19:48:13.107448  236648 kic_runner.go:93] Run: sudo service kubelet stop
	I0207 19:48:13.107461  236648 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 sudo service kubelet stop]
	I0207 19:48:13.143425  236648 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6 is not running
	
	** /stderr **
	W0207 19:48:13.143449  236648 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6 is not running
	I0207 19:48:13.143542  236648 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0207 19:48:13.143555  236648 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0207 19:48:13.178427  236648 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6 is not running
	I0207 19:48:13.178461  236648 kic.go:466] successfully stopped kubernetes!
	I0207 19:48:13.178506  236648 kic_runner.go:93] Run: pgrep kube-apiserver
	I0207 19:48:13.178512  236648 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220207194436-6868 pgrep kube-apiserver]
	I0207 19:48:13.249898  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:16.286479  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:19.328161  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:22.371879  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:25.410512  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:28.447348  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:31.498474  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:34.534497  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:37.570490  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:40.607225  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:43.661118  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:46.702504  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:49.736995  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:52.771795  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:55.814496  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:48:58.851870  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:01.896166  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:04.934502  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:07.973267  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:11.014509  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:14.050482  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:17.087724  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:20.125022  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:23.160120  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:26.198471  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:29.234046  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:32.278519  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:35.321146  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:38.362175  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:41.397089  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:44.431994  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:47.466527  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:50.504389  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:53.538487  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:56.574493  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:49:59.610491  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:02.646519  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:05.682466  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:08.721513  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:11.760292  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:14.796080  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:17.831837  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:20.866498  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:23.899881  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:26.934482  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:29.970496  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:33.008100  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:36.043567  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:39.078497  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:42.114706  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:45.152910  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:48.188750  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:51.226489  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:54.263331  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:50:57.299559  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:51:00.335793  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:51:03.373424  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:51:06.410638  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:51:09.447256  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:51:12.483581  236648 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:51:15.523504  236648 stop.go:59] stop err: Maximum number of retries (60) exceeded
	W0207 19:51:15.523567  236648 stop.go:163] stop host returned error: Temporary Error: stop: Maximum number of retries (60) exceeded
	I0207 19:51:15.526081  236648 out.go:176] 
	W0207 19:51:15.526233  236648 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: Temporary Error: stop: Maximum number of retries (60) exceeded
	X Exiting due to GUEST_STOP_TIMEOUT: Temporary Error: stop: Maximum number of retries (60) exceeded
	W0207 19:51:15.526245  236648 out.go:241] * 
	* 
	W0207 19:51:15.528325  236648 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 19:51:15.529788  236648 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-20220207194436-6868 --alsologtostderr -v=3" : exit status 82
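The stderr trace explains the 3m2.79s spent in this test: stopping kubelet fails because the container is not running, and libmachine then polls "docker container inspect --format={{.State.Status}}" roughly every three seconds from 19:48:13 until 19:51:15 before giving up with "Maximum number of retries (60) exceeded" and GUEST_STOP_TIMEOUT. A Go sketch of that bounded-retry shape; the interval and the target state are assumptions read off the timestamps above, not minikube's actual constants:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	const (
		maxRetries = 60              // "Maximum number of retries (60) exceeded"
		interval   = 3 * time.Second // inferred from the ~3s gaps between polls above
	)

	// waitStopped polls the container state until it reaches a terminal state
	// or the retry budget is spent. A container stuck in "created", as in this
	// report, never transitions, so the budget is always exhausted
	// (60 polls at ~3s each matches the ~3m wall time logged above).
	func waitStopped(name string) error {
		for i := 0; i < maxRetries; i++ {
			out, err := exec.Command("docker", "container", "inspect", name,
				"--format", "{{.State.Status}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "exited" {
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("stop: Maximum number of retries (%d) exceeded", maxRetries)
	}

	func main() {
		fmt.Println(waitStopped("old-k8s-version-20220207194436-6868"))
	}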
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207194436-6868
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207194436-6868:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6",
	        "Created": "2022-02-07T19:48:05.877064608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "network old-k8s-version-20220207194436-6868 not found",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/old-k8s-version-20220207194436-6868",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220207194436-6868:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207194436-6868",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b-init/diff:/var/lib/docker/overlay2/40e36e3239cb5157195ce223d31e5e12299d283013c03c510d3e8a2442fd2c92/diff:/var/lib/docker/overlay2/21617b479acf17653e84d6ae3cb822db5c7eac887dbffb288d5171c45b712c0d/diff:/var/lib/docker/overlay2/2dbc01d4f6abd3524aaa75f3f362b44291e07e9adaadba323bd734a77bfa9c6a/diff:/var/lib/docker/overlay2/1c3968298265a3203685852a8c6fa391e12253b485741654087afb7a90fc1d77/diff:/var/lib/docker/overlay2/6a2a8c5d6504d982da53621a1d6f96ee3336c19fd9f294d5b418cc706dc8944c/diff:/var/lib/docker/overlay2/7e7a079457982ab93f984a944ffef8ef6a0aedcf9ae87dd48d2bfaebfa401212/diff:/var/lib/docker/overlay2/fae622e4af16ac53e0d1ab6e7ec0b23cddddaf4c7b9c906b18db9f5a7421f38d/diff:/var/lib/docker/overlay2/d4355831ba7c15624e8cc51f64415d91ec01d79fc16f0d8cce7cf9819963c9be/diff:/var/lib/docker/overlay2/5453a1a1be3960eaab33a3909934d20d3b1f1d0bd01d04e14158548e63d9ccc7/diff:/var/lib/docker/overlay2/b7f7aa
f98954a80aedd0a57753ced767fc40fd261655975f8bb2201f533af508/diff:/var/lib/docker/overlay2/582d45c1dfa23d0fcf227689ca05cc54f60cdf8562c7df098f15c0596f9f3b84/diff:/var/lib/docker/overlay2/97921dc2ea2a25724aa5bc8ee71d705ad02bb5de7327e9125b14e7ed3e0a36d9/diff:/var/lib/docker/overlay2/8994377961c9baa6fdb05a49604c2c1639c56f117040ce16cfcd7068142802d0/diff:/var/lib/docker/overlay2/741d31f19db93cecb47cf3edf12208c50adfa881f267e46fc2200168359e063e/diff:/var/lib/docker/overlay2/be1305b93735b2cb41c1050a14599a08f09c103ef39104313e5c6ea7783a25d0/diff:/var/lib/docker/overlay2/d2c6406a44063188bff06eacfb837bce43d713aa16c08f434607947a2e2aeb2d/diff:/var/lib/docker/overlay2/2354e37c2793df3a7faa18542aa5d3030952a40a0dd4361a9ad132d57efd3dea/diff:/var/lib/docker/overlay2/82b71b4192e75ce019792a62b12c4d48d3352cd8295673aa7b75c929d0c7f4ae/diff:/var/lib/docker/overlay2/6c62b320b27e5a2c13eea8d9b6e430fb56485a76ac7bf171136df923f96334b6/diff:/var/lib/docker/overlay2/f65c213239b185d01f445a11f073325d0aa4a30296ee7125aeec4abc8b80289e/diff:/var/lib/d
ocker/overlay2/f4ab87d7e9bbbf343135421546bd636317abbc0406bd09bc0e7ded9abb5ffe07/diff:/var/lib/docker/overlay2/c962dce8dce172c66b9fae4d0533e0b9eb6f537f99f2ae091522820f3437e87b/diff:/var/lib/docker/overlay2/c5f3b750eb1f675794758011aa1f3cf1afaaea6aeabaacfa7127c4e8eb3e9d3f/diff:/var/lib/docker/overlay2/165d7a930e1764d6612409e5b2abab0706c771e2ea6d53d26f379e5c8420b768/diff:/var/lib/docker/overlay2/c639594ead9cef5a157dcd6c5d3b58acfb87a1b54e09f09a89e5efe42a0250cb/diff:/var/lib/docker/overlay2/22d4ffdeda2486e79e77cdf6b2966c4e3f7a7c1d385f6914cf9abbbafd681fc5/diff:/var/lib/docker/overlay2/06347ddaa20c499bc26010d7a1ef1ac9c484d7088bac49bc47d017af272c5c8b/diff:/var/lib/docker/overlay2/4039a84be3e1b1c0c36b2bd5611308130efae8b5d3993d514489c326b58181a2/diff:/var/lib/docker/overlay2/00ba3d7351a8d15c1f38c8a5267ac7da1315950a1583dfe162bbe06e240d4e4e/diff:/var/lib/docker/overlay2/b66091d419eb3b0a03f2363973ab6750206d5cb1e33c6a80f22ac7b1b1c20015/diff:/var/lib/docker/overlay2/60a3c3f90313e57450868dd29163b9746391dbc376387ee61b371e7753d
2a9ed/diff:/var/lib/docker/overlay2/a4077b320de983a23a73f3509a3b65aa35c912b90e61cf3446d45334952197cc/diff:/var/lib/docker/overlay2/87466c009c98c77512f99106ac7b5b4682f6d57d0895993878a55843dfde4f0a/diff:/var/lib/docker/overlay2/be9cd77fbde8968efd17d63e6bf10bab9ae227bf6efd5ff15488effa8ed534f4/diff:/var/lib/docker/overlay2/692a8a7c4d738fb8caee425a6243fdaf5a5c4e7fdb6bda1969cba3c7099060d9/diff:/var/lib/docker/overlay2/90779bbe942cebdf0402a74acd25799917448b7948891aaf60636bbb4410e2d5/diff:/var/lib/docker/overlay2/f403aa656638a54017c9beeb448df9b3957711bbf52e5e92e279dd6a8e3a1a7b/diff:/var/lib/docker/overlay2/3e3a096efd54b9035c41e17e3c469d848ce1cddc9ad895ed288525a89e7d5153/diff:/var/lib/docker/overlay2/71a400a65bb51da094b9d5b672bf3e4973957a356b0480e8fd559aa527c64638/diff:/var/lib/docker/overlay2/5ecbee969df6610687807dc48c221a03964af0e197a0b8f0b5c38b70ab38cf4c/diff:/var/lib/docker/overlay2/1f806f3d9e1cd280380c82dd805cd7489ed4ed1d66b824ad880754d19b08dfa2/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207194436-6868",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207194436-6868/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207194436-6868",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207194436-6868": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868: exit status 7 (104.237651ms)

-- stdout --
	Nonexistent

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220207194436-6868" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (182.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868: exit status 7 (100.443636ms)

-- stdout --
	Nonexistent

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220207194436-6868 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207194436-6868
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207194436-6868:

-- stdout --
	[
	    {
	        "Id": "710fd7fa598f32fe69c0d8fe72e0d21121ad230e6997ad139b53f6f3b52adfd6",
	        "Created": "2022-02-07T19:48:05.877064608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "network old-k8s-version-20220207194436-6868 not found",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/old-k8s-version-20220207194436-6868",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220207194436-6868:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207194436-6868",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b-init/diff:/var/lib/docker/overlay2/40e36e3239cb5157195ce223d31e5e12299d283013c03c510d3e8a2442fd2c92/diff:/var/lib/docker/overlay2/21617b479acf17653e84d6ae3cb822db5c7eac887dbffb288d5171c45b712c0d/diff:/var/lib/docker/overlay2/2dbc01d4f6abd3524aaa75f3f362b44291e07e9adaadba323bd734a77bfa9c6a/diff:/var/lib/docker/overlay2/1c3968298265a3203685852a8c6fa391e12253b485741654087afb7a90fc1d77/diff:/var/lib/docker/overlay2/6a2a8c5d6504d982da53621a1d6f96ee3336c19fd9f294d5b418cc706dc8944c/diff:/var/lib/docker/overlay2/7e7a079457982ab93f984a944ffef8ef6a0aedcf9ae87dd48d2bfaebfa401212/diff:/var/lib/docker/overlay2/fae622e4af16ac53e0d1ab6e7ec0b23cddddaf4c7b9c906b18db9f5a7421f38d/diff:/var/lib/docker/overlay2/d4355831ba7c15624e8cc51f64415d91ec01d79fc16f0d8cce7cf9819963c9be/diff:/var/lib/docker/overlay2/5453a1a1be3960eaab33a3909934d20d3b1f1d0bd01d04e14158548e63d9ccc7/diff:/var/lib/docker/overlay2/b7f7aa
f98954a80aedd0a57753ced767fc40fd261655975f8bb2201f533af508/diff:/var/lib/docker/overlay2/582d45c1dfa23d0fcf227689ca05cc54f60cdf8562c7df098f15c0596f9f3b84/diff:/var/lib/docker/overlay2/97921dc2ea2a25724aa5bc8ee71d705ad02bb5de7327e9125b14e7ed3e0a36d9/diff:/var/lib/docker/overlay2/8994377961c9baa6fdb05a49604c2c1639c56f117040ce16cfcd7068142802d0/diff:/var/lib/docker/overlay2/741d31f19db93cecb47cf3edf12208c50adfa881f267e46fc2200168359e063e/diff:/var/lib/docker/overlay2/be1305b93735b2cb41c1050a14599a08f09c103ef39104313e5c6ea7783a25d0/diff:/var/lib/docker/overlay2/d2c6406a44063188bff06eacfb837bce43d713aa16c08f434607947a2e2aeb2d/diff:/var/lib/docker/overlay2/2354e37c2793df3a7faa18542aa5d3030952a40a0dd4361a9ad132d57efd3dea/diff:/var/lib/docker/overlay2/82b71b4192e75ce019792a62b12c4d48d3352cd8295673aa7b75c929d0c7f4ae/diff:/var/lib/docker/overlay2/6c62b320b27e5a2c13eea8d9b6e430fb56485a76ac7bf171136df923f96334b6/diff:/var/lib/docker/overlay2/f65c213239b185d01f445a11f073325d0aa4a30296ee7125aeec4abc8b80289e/diff:/var/lib/d
ocker/overlay2/f4ab87d7e9bbbf343135421546bd636317abbc0406bd09bc0e7ded9abb5ffe07/diff:/var/lib/docker/overlay2/c962dce8dce172c66b9fae4d0533e0b9eb6f537f99f2ae091522820f3437e87b/diff:/var/lib/docker/overlay2/c5f3b750eb1f675794758011aa1f3cf1afaaea6aeabaacfa7127c4e8eb3e9d3f/diff:/var/lib/docker/overlay2/165d7a930e1764d6612409e5b2abab0706c771e2ea6d53d26f379e5c8420b768/diff:/var/lib/docker/overlay2/c639594ead9cef5a157dcd6c5d3b58acfb87a1b54e09f09a89e5efe42a0250cb/diff:/var/lib/docker/overlay2/22d4ffdeda2486e79e77cdf6b2966c4e3f7a7c1d385f6914cf9abbbafd681fc5/diff:/var/lib/docker/overlay2/06347ddaa20c499bc26010d7a1ef1ac9c484d7088bac49bc47d017af272c5c8b/diff:/var/lib/docker/overlay2/4039a84be3e1b1c0c36b2bd5611308130efae8b5d3993d514489c326b58181a2/diff:/var/lib/docker/overlay2/00ba3d7351a8d15c1f38c8a5267ac7da1315950a1583dfe162bbe06e240d4e4e/diff:/var/lib/docker/overlay2/b66091d419eb3b0a03f2363973ab6750206d5cb1e33c6a80f22ac7b1b1c20015/diff:/var/lib/docker/overlay2/60a3c3f90313e57450868dd29163b9746391dbc376387ee61b371e7753d
2a9ed/diff:/var/lib/docker/overlay2/a4077b320de983a23a73f3509a3b65aa35c912b90e61cf3446d45334952197cc/diff:/var/lib/docker/overlay2/87466c009c98c77512f99106ac7b5b4682f6d57d0895993878a55843dfde4f0a/diff:/var/lib/docker/overlay2/be9cd77fbde8968efd17d63e6bf10bab9ae227bf6efd5ff15488effa8ed534f4/diff:/var/lib/docker/overlay2/692a8a7c4d738fb8caee425a6243fdaf5a5c4e7fdb6bda1969cba3c7099060d9/diff:/var/lib/docker/overlay2/90779bbe942cebdf0402a74acd25799917448b7948891aaf60636bbb4410e2d5/diff:/var/lib/docker/overlay2/f403aa656638a54017c9beeb448df9b3957711bbf52e5e92e279dd6a8e3a1a7b/diff:/var/lib/docker/overlay2/3e3a096efd54b9035c41e17e3c469d848ce1cddc9ad895ed288525a89e7d5153/diff:/var/lib/docker/overlay2/71a400a65bb51da094b9d5b672bf3e4973957a356b0480e8fd559aa527c64638/diff:/var/lib/docker/overlay2/5ecbee969df6610687807dc48c221a03964af0e197a0b8f0b5c38b70ab38cf4c/diff:/var/lib/docker/overlay2/1f806f3d9e1cd280380c82dd805cd7489ed4ed1d66b824ad880754d19b08dfa2/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/087872f9edaf90ac3ade99ce6890e05c35f4c77e965f8e7184fb1ce06068554b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207194436-6868",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207194436-6868/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207194436-6868",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207194436-6868": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868: exit status 7 (107.351113ms)

-- stdout --
	Nonexistent

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220207194436-6868" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.36s)

TestNetworkPlugins/group/calico/Start (525.13s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (8m45.109859275s)

-- stdout --
	* [calico-20220207194241-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20220207194241-6868 in cluster calico-20220207194241-6868
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0207 19:55:21.752575  297158 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:55:21.752668  297158 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:55:21.752672  297158 out.go:310] Setting ErrFile to fd 2...
	I0207 19:55:21.752676  297158 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:55:21.752791  297158 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:55:21.753071  297158 out.go:304] Setting JSON to false
	I0207 19:55:21.755720  297158 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5878,"bootTime":1644257844,"procs":868,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:55:21.755855  297158 start.go:122] virtualization: kvm guest
	I0207 19:55:21.758997  297158 out.go:176] * [calico-20220207194241-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0207 19:55:21.760664  297158 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:55:21.759259  297158 notify.go:174] Checking for updates...
	I0207 19:55:21.762207  297158 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:55:21.763905  297158 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:55:21.765405  297158 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	I0207 19:55:21.766890  297158 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0207 19:55:21.767611  297158 config.go:176] Loaded profile config "cilium-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:55:21.767737  297158 config.go:176] Loaded profile config "false-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:55:21.767826  297158 config.go:176] Loaded profile config "old-k8s-version-20220207194436-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0207 19:55:21.767868  297158 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:55:21.815658  297158 docker.go:132] docker version: linux-20.10.12
	I0207 19:55:21.815774  297158 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:55:21.923552  297158 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:55:21.847701729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:55:21.923696  297158 docker.go:237] overlay module found
	I0207 19:55:21.926410  297158 out.go:176] * Using the docker driver based on user configuration
	I0207 19:55:21.926441  297158 start.go:281] selected driver: docker
	I0207 19:55:21.926449  297158 start.go:798] validating driver "docker" against <nil>
	I0207 19:55:21.926486  297158 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0207 19:55:21.926534  297158 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0207 19:55:21.926587  297158 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0207 19:55:21.927935  297158 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0207 19:55:21.928804  297158 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:55:22.041329  297158 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:55:21.966464034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:55:22.041556  297158 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 19:55:22.041785  297158 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 19:55:22.041827  297158 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 19:55:22.041857  297158 cni.go:93] Creating CNI manager for "calico"
	I0207 19:55:22.041866  297158 start_flags.go:297] Found "Calico" CNI - setting NetworkPlugin=cni
	I0207 19:55:22.041883  297158 start_flags.go:302] config:
	{Name:calico-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:calico-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:55:22.044338  297158 out.go:176] * Starting control plane node calico-20220207194241-6868 in cluster calico-20220207194241-6868
	I0207 19:55:22.044396  297158 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:55:22.046185  297158 out.go:176] * Pulling base image ...
	I0207 19:55:22.046228  297158 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:55:22.046267  297158 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 19:55:22.046295  297158 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:55:22.046298  297158 cache.go:57] Caching tarball of preloaded images
	I0207 19:55:22.046669  297158 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 19:55:22.046702  297158 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 19:55:22.046866  297158 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/config.json ...
	I0207 19:55:22.046898  297158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/config.json: {Name:mka557ba84cba5782f21db8ec33788ff2b1ad75a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:55:22.089597  297158 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 19:55:22.089624  297158 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 19:55:22.089638  297158 cache.go:208] Successfully downloaded all kic artifacts
	I0207 19:55:22.089674  297158 start.go:313] acquiring machines lock for calico-20220207194241-6868: {Name:mk1846de8046bee0253070129c9d3f6e56a42187 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 19:55:22.089826  297158 start.go:317] acquired machines lock for "calico-20220207194241-6868" in 129.655µs
	I0207 19:55:22.089859  297158 start.go:89] Provisioning new machine with config: &{Name:calico-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:calico-20220207194241-6868 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:55:22.089953  297158 start.go:126] createHost starting for "" (driver="docker")
	I0207 19:55:22.092482  297158 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 19:55:22.092843  297158 start.go:160] libmachine.API.Create for "calico-20220207194241-6868" (driver="docker")
	I0207 19:55:22.092890  297158 client.go:168] LocalClient.Create starting
	I0207 19:55:22.093007  297158 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem
	I0207 19:55:22.093059  297158 main.go:130] libmachine: Decoding PEM data...
	I0207 19:55:22.093105  297158 main.go:130] libmachine: Parsing certificate...
	I0207 19:55:22.093174  297158 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem
	I0207 19:55:22.093199  297158 main.go:130] libmachine: Decoding PEM data...
	I0207 19:55:22.093217  297158 main.go:130] libmachine: Parsing certificate...
	I0207 19:55:22.093631  297158 cli_runner.go:133] Run: docker network inspect calico-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 19:55:22.127558  297158 cli_runner.go:180] docker network inspect calico-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 19:55:22.127617  297158 network_create.go:254] running [docker network inspect calico-20220207194241-6868] to gather additional debugging logs...
	I0207 19:55:22.127643  297158 cli_runner.go:133] Run: docker network inspect calico-20220207194241-6868
	W0207 19:55:22.162783  297158 cli_runner.go:180] docker network inspect calico-20220207194241-6868 returned with exit code 1
	I0207 19:55:22.162821  297158 network_create.go:257] error running [docker network inspect calico-20220207194241-6868]: docker network inspect calico-20220207194241-6868: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220207194241-6868
	I0207 19:55:22.162856  297158 network_create.go:259] output of [docker network inspect calico-20220207194241-6868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220207194241-6868
	
	** /stderr **
	I0207 19:55:22.162907  297158 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:55:22.198787  297158 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-51a252b7bcae IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0d:1f:dd:13}}
	I0207 19:55:22.199770  297158 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-d571b5f108cc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:65:54:b7:1b}}
	I0207 19:55:22.200793  297158 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000132650] misses:0}
	I0207 19:55:22.200865  297158 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 19:55:22.200893  297158 network_create.go:106] attempt to create docker network calico-20220207194241-6868 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0207 19:55:22.200959  297158 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220207194241-6868
	I0207 19:55:22.287094  297158 network_create.go:90] docker network calico-20220207194241-6868 192.168.67.0/24 created
	I0207 19:55:22.287160  297158 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220207194241-6868" container
	I0207 19:55:22.287235  297158 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 19:55:22.329875  297158 cli_runner.go:133] Run: docker volume create calico-20220207194241-6868 --label name.minikube.sigs.k8s.io=calico-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true
	I0207 19:55:22.366479  297158 oci.go:102] Successfully created a docker volume calico-20220207194241-6868
	I0207 19:55:22.366568  297158 cli_runner.go:133] Run: docker run --rm --name calico-20220207194241-6868-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220207194241-6868 --entrypoint /usr/bin/test -v calico-20220207194241-6868:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 19:55:22.970294  297158 oci.go:106] Successfully prepared a docker volume calico-20220207194241-6868
	I0207 19:55:22.970384  297158 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:55:22.970408  297158 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 19:55:22.970487  297158 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 19:55:31.961241  297158 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (8.990682029s)
	I0207 19:55:31.961294  297158 kic.go:188] duration metric: took 8.990883 seconds to extract preloaded images to volume
	W0207 19:55:31.961346  297158 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0207 19:55:31.961355  297158 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0207 19:55:31.961395  297158 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 19:55:32.112883  297158 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220207194241-6868 --name calico-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220207194241-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220207194241-6868 --network calico-20220207194241-6868 --ip 192.168.67.2 --volume calico-20220207194241-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	I0207 19:55:32.920785  297158 cli_runner.go:133] Run: docker container inspect calico-20220207194241-6868 --format={{.State.Running}}
	I0207 19:55:32.981364  297158 cli_runner.go:133] Run: docker container inspect calico-20220207194241-6868 --format={{.State.Status}}
	I0207 19:55:33.028265  297158 cli_runner.go:133] Run: docker exec calico-20220207194241-6868 stat /var/lib/dpkg/alternatives/iptables
	I0207 19:55:33.126725  297158 oci.go:281] the created container "calico-20220207194241-6868" has a running status.
	I0207 19:55:33.126769  297158 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/calico-20220207194241-6868/id_rsa...
	I0207 19:55:33.581604  297158 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/calico-20220207194241-6868/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0207 19:55:33.672515  297158 cli_runner.go:133] Run: docker container inspect calico-20220207194241-6868 --format={{.State.Status}}
	I0207 19:55:33.710904  297158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0207 19:55:33.710932  297158 kic_runner.go:114] Args: [docker exec --privileged calico-20220207194241-6868 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0207 19:55:33.788940  297158 cli_runner.go:133] Run: docker container inspect calico-20220207194241-6868 --format={{.State.Status}}
	I0207 19:55:33.829679  297158 machine.go:88] provisioning docker machine ...
	I0207 19:55:33.829724  297158 ubuntu.go:169] provisioning hostname "calico-20220207194241-6868"
	I0207 19:55:33.829774  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:33.872528  297158 main.go:130] libmachine: Using SSH client type: native
	I0207 19:55:33.872766  297158 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49419 <nil> <nil>}
	I0207 19:55:33.872787  297158 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20220207194241-6868 && echo "calico-20220207194241-6868" | sudo tee /etc/hostname
	I0207 19:55:34.013028  297158 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220207194241-6868
	
	I0207 19:55:34.013150  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:34.061713  297158 main.go:130] libmachine: Using SSH client type: native
	I0207 19:55:34.061856  297158 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49419 <nil> <nil>}
	I0207 19:55:34.061888  297158 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220207194241-6868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220207194241-6868/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220207194241-6868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0207 19:55:34.182442  297158 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0207 19:55:34.182476  297158 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube}
	I0207 19:55:34.182501  297158 ubuntu.go:177] setting up certificates
	I0207 19:55:34.182510  297158 provision.go:83] configureAuth start
	I0207 19:55:34.182564  297158 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220207194241-6868
	I0207 19:55:34.216852  297158 provision.go:138] copyHostCerts
	I0207 19:55:34.216910  297158 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem, removing ...
	I0207 19:55:34.216921  297158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem
	I0207 19:55:34.216983  297158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem (1675 bytes)
	I0207 19:55:34.217054  297158 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem, removing ...
	I0207 19:55:34.217067  297158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem
	I0207 19:55:34.217090  297158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem (1078 bytes)
	I0207 19:55:34.217151  297158 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem, removing ...
	I0207 19:55:34.217155  297158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem
	I0207 19:55:34.217175  297158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem (1123 bytes)
	I0207 19:55:34.217222  297158 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem org=jenkins.calico-20220207194241-6868 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220207194241-6868]
	I0207 19:55:34.946641  297158 provision.go:172] copyRemoteCerts
	I0207 19:55:34.946717  297158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0207 19:55:34.946797  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:34.991317  297158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/calico-20220207194241-6868/id_rsa Username:docker}
	I0207 19:55:35.108377  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0207 19:55:35.130412  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0207 19:55:35.151793  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0207 19:55:35.178827  297158 provision.go:86] duration metric: configureAuth took 996.30094ms
	I0207 19:55:35.178858  297158 ubuntu.go:193] setting minikube options for container-runtime
	I0207 19:55:35.179062  297158 config.go:176] Loaded profile config "calico-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:55:35.179130  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:35.223441  297158 main.go:130] libmachine: Using SSH client type: native
	I0207 19:55:35.223597  297158 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49419 <nil> <nil>}
	I0207 19:55:35.223616  297158 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0207 19:55:35.348861  297158 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0207 19:55:35.348894  297158 ubuntu.go:71] root file system type: overlay
	I0207 19:55:35.349095  297158 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0207 19:55:35.349179  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:35.393352  297158 main.go:130] libmachine: Using SSH client type: native
	I0207 19:55:35.393539  297158 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49419 <nil> <nil>}
	I0207 19:55:35.393640  297158 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0207 19:55:35.536012  297158 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0207 19:55:35.536091  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:35.583380  297158 main.go:130] libmachine: Using SSH client type: native
	I0207 19:55:35.583578  297158 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49419 <nil> <nil>}
	I0207 19:55:35.583611  297158 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0207 19:55:36.421445  297158 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-07 19:55:35.532904228 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0207 19:55:36.421483  297158 machine.go:91] provisioned docker machine in 2.591775202s
	I0207 19:55:36.421494  297158 client.go:171] LocalClient.Create took 14.328593574s
	I0207 19:55:36.421504  297158 start.go:168] duration metric: libmachine.API.Create for "calico-20220207194241-6868" took 14.328666311s
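The unit swap a few lines up is idempotent by construction: the candidate unit is written to docker.service.new, diffed against the live file, and only moved into place (followed by daemon-reload, enable, and a forced restart) when the contents differ. A rough Go sketch of that pattern, assuming root on a systemd host; the function and error handling are illustrative, not minikube's actual code:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

const unitPath = "/lib/systemd/system/docker.service"

// updateUnit installs newContents only when it differs from the current
// unit, then reloads systemd and restarts docker, mirroring the
// diff-or-swap one-liner in the log above.
func updateUnit(newContents []byte) error {
	current, err := os.ReadFile(unitPath)
	if err == nil && bytes.Equal(current, newContents) {
		return nil // unchanged: skip the reload/restart entirely
	}
	if err := os.WriteFile(unitPath+".new", newContents, 0o644); err != nil {
		return err
	}
	if err := os.Rename(unitPath+".new", unitPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"-f", "enable", "docker"},
		{"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit(unit); err != nil {
		log.Fatal(err)
	}
}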
	I0207 19:55:36.421512  297158 start.go:267] post-start starting for "calico-20220207194241-6868" (driver="docker")
	I0207 19:55:36.421518  297158 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0207 19:55:36.421583  297158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0207 19:55:36.421623  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:36.484423  297158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/calico-20220207194241-6868/id_rsa Username:docker}
	I0207 19:55:36.581135  297158 ssh_runner.go:195] Run: cat /etc/os-release
	I0207 19:55:36.584484  297158 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0207 19:55:36.584518  297158 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0207 19:55:36.584533  297158 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0207 19:55:36.584540  297158 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0207 19:55:36.584552  297158 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/addons for local assets ...
	I0207 19:55:36.584647  297158 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files for local assets ...
	I0207 19:55:36.584757  297158 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem -> 68682.pem in /etc/ssl/certs
	I0207 19:55:36.584862  297158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0207 19:55:36.592978  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem --> /etc/ssl/certs/68682.pem (1708 bytes)
	I0207 19:55:36.615822  297158 start.go:270] post-start completed in 194.295446ms
	I0207 19:55:36.616303  297158 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220207194241-6868
	I0207 19:55:36.669600  297158 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/config.json ...
	I0207 19:55:36.669903  297158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:55:36.669975  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:36.709724  297158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/calico-20220207194241-6868/id_rsa Username:docker}
	I0207 19:55:36.802926  297158 start.go:129] duration metric: createHost completed in 14.712959148s
	I0207 19:55:36.802961  297158 start.go:80] releasing machines lock for "calico-20220207194241-6868", held for 14.713121607s
	I0207 19:55:36.803057  297158 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220207194241-6868
	I0207 19:55:36.838407  297158 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0207 19:55:36.838472  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:36.838706  297158 ssh_runner.go:195] Run: systemctl --version
	I0207 19:55:36.838748  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:55:36.894842  297158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/calico-20220207194241-6868/id_rsa Username:docker}
	I0207 19:55:36.895125  297158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/calico-20220207194241-6868/id_rsa Username:docker}
	I0207 19:55:37.012189  297158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0207 19:55:37.024709  297158 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:55:37.035229  297158 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0207 19:55:37.035296  297158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0207 19:55:37.054837  297158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0207 19:55:37.078629  297158 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0207 19:55:37.190964  297158 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0207 19:55:37.329746  297158 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:55:37.341669  297158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0207 19:55:37.435256  297158 ssh_runner.go:195] Run: sudo systemctl start docker
	I0207 19:55:37.450029  297158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:55:37.505691  297158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:55:37.554520  297158 out.go:203] * Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
	I0207 19:55:37.554625  297158 cli_runner.go:133] Run: docker network inspect calico-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:55:37.599445  297158 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0207 19:55:37.603211  297158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0207 19:55:37.615795  297158 out.go:176]   - kubelet.housekeeping-interval=5m
	I0207 19:55:37.615874  297158 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:55:37.615956  297158 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 19:55:37.657311  297158 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.3
	k8s.gcr.io/kube-scheduler:v1.23.3
	k8s.gcr.io/kube-proxy:v1.23.3
	k8s.gcr.io/kube-controller-manager:v1.23.3
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 19:55:37.657338  297158 docker.go:537] Images already preloaded, skipping extraction
	I0207 19:55:37.657401  297158 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 19:55:37.698366  297158 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.3
	k8s.gcr.io/kube-proxy:v1.23.3
	k8s.gcr.io/kube-controller-manager:v1.23.3
	k8s.gcr.io/kube-scheduler:v1.23.3
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 19:55:37.698395  297158 cache_images.go:84] Images are preloaded, skipping loading
	I0207 19:55:37.698475  297158 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0207 19:55:37.803651  297158 cni.go:93] Creating CNI manager for "calico"
	I0207 19:55:37.803687  297158 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0207 19:55:37.803712  297158 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220207194241-6868 NodeName:calico-20220207194241-6868 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0207 19:55:37.803871  297158 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220207194241-6868"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0207 19:55:37.804023  297158 kubeadm.go:935] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220207194241-6868 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:calico-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0207 19:55:37.804084  297158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0207 19:55:37.812170  297158 binaries.go:44] Found k8s binaries, skipping transfer
	I0207 19:55:37.812234  297158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0207 19:55:37.820643  297158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes)
	I0207 19:55:37.835264  297158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0207 19:55:37.853890  297158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
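kubeadm will reject a config whose documents do not parse, so a cheap pre-flight on a file like the kubeadm.yaml.new just transferred is to walk the multi-document YAML and confirm each document's apiVersion/kind. A small sketch using gopkg.in/yaml.v3; this is not part of minikube, and the path is copied from the log for illustration:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// printKinds decodes each document in a multi-document kubeadm config
// (like the one generated above) and reports its apiVersion/kind, a
// cheap sanity check before handing the file to `kubeadm init`.
func printKinds(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // no more documents
			}
			return err
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

func main() {
	if err := printKinds("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		log.Fatal(err)
	}
}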
	I0207 19:55:37.870434  297158 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0207 19:55:37.875004  297158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0207 19:55:37.888721  297158 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868 for IP: 192.168.67.2
	I0207 19:55:37.888848  297158 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.key
	I0207 19:55:37.888904  297158 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/proxy-client-ca.key
	I0207 19:55:37.888961  297158 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/client.key
	I0207 19:55:37.888976  297158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/client.crt with IP's: []
	I0207 19:55:38.303198  297158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/client.crt ...
	I0207 19:55:38.303235  297158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/client.crt: {Name:mk373a6786f8e0bcfdb686618d3abd8cd904c742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:55:38.303471  297158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/client.key ...
	I0207 19:55:38.303489  297158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/client.key: {Name:mkf450a93a9834a114b8655ed330e1c497816a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:55:38.303611  297158 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.key.c7fa3a9e
	I0207 19:55:38.303636  297158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0207 19:55:38.412473  297158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.crt.c7fa3a9e ...
	I0207 19:55:38.412511  297158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.crt.c7fa3a9e: {Name:mkedda9d4d6daf85d922d36752def4bda513f7e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:55:38.412741  297158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.key.c7fa3a9e ...
	I0207 19:55:38.412759  297158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.key.c7fa3a9e: {Name:mke315bbcc81646f5d10170e68dc4b300fb851f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:55:38.412878  297158 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.crt
	I0207 19:55:38.412953  297158 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.key
	I0207 19:55:38.413015  297158 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/proxy-client.key
	I0207 19:55:38.413037  297158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/proxy-client.crt with IP's: []
	I0207 19:55:38.615361  297158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/proxy-client.crt ...
	I0207 19:55:38.615395  297158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/proxy-client.crt: {Name:mk3cb697a6b4a2750eb47f0f5b2b73f012fd07e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:55:38.615598  297158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/proxy-client.key ...
	I0207 19:55:38.615612  297158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/proxy-client.key: {Name:mke13434ec2190c253b18aef366fa02fc042db4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
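The crypto.go steps above generate fresh key pairs and sign certificates that carry the cluster IPs as SANs. A compact sketch of the core standard-library calls involved; unlike minikube, it self-signs instead of signing with the minikube CA, and the IPs are copied from the log purely for illustration:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key pair and a self-signed certificate with IP SANs,
	// loosely mirroring the apiserver cert generation logged above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.67.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}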
	I0207 19:55:38.615782  297158 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/6868.pem (1338 bytes)
	W0207 19:55:38.615820  297158 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/6868_empty.pem, impossibly tiny 0 bytes
	I0207 19:55:38.615832  297158 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem (1675 bytes)
	I0207 19:55:38.615857  297158 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem (1078 bytes)
	I0207 19:55:38.615884  297158 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem (1123 bytes)
	I0207 19:55:38.615907  297158 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem (1675 bytes)
	I0207 19:55:38.615970  297158 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem (1708 bytes)
	I0207 19:55:38.616840  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0207 19:55:38.636257  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0207 19:55:38.673375  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0207 19:55:38.701057  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/calico-20220207194241-6868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0207 19:55:38.721285  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0207 19:55:38.744477  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0207 19:55:38.780929  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0207 19:55:38.807133  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0207 19:55:38.827075  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0207 19:55:38.853153  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/6868.pem --> /usr/share/ca-certificates/6868.pem (1338 bytes)
	I0207 19:55:38.879782  297158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem --> /usr/share/ca-certificates/68682.pem (1708 bytes)
	I0207 19:55:38.907226  297158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0207 19:55:38.923987  297158 ssh_runner.go:195] Run: openssl version
	I0207 19:55:38.929285  297158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0207 19:55:38.937575  297158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:55:38.941777  297158 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  7 19:17 /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:55:38.941837  297158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:55:38.948690  297158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0207 19:55:38.963387  297158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6868.pem && ln -fs /usr/share/ca-certificates/6868.pem /etc/ssl/certs/6868.pem"
	I0207 19:55:38.975265  297158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6868.pem
	I0207 19:55:38.979684  297158 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  7 19:21 /usr/share/ca-certificates/6868.pem
	I0207 19:55:38.979752  297158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6868.pem
	I0207 19:55:38.987062  297158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6868.pem /etc/ssl/certs/51391683.0"
	I0207 19:55:38.996956  297158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68682.pem && ln -fs /usr/share/ca-certificates/68682.pem /etc/ssl/certs/68682.pem"
	I0207 19:55:39.005629  297158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68682.pem
	I0207 19:55:39.009337  297158 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  7 19:21 /usr/share/ca-certificates/68682.pem
	I0207 19:55:39.009400  297158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68682.pem
	I0207 19:55:39.015251  297158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68682.pem /etc/ssl/certs/3ec20f2e.0"
	I0207 19:55:39.023861  297158 kubeadm.go:390] StartCluster: {Name:calico-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:calico-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:55:39.023983  297158 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0207 19:55:39.080630  297158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0207 19:55:39.090850  297158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0207 19:55:39.103251  297158 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0207 19:55:39.103322  297158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0207 19:55:39.112222  297158 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0207 19:55:39.112279  297158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0207 19:55:39.873438  297158 out.go:203]   - Generating certificates and keys ...
	I0207 19:55:43.063899  297158 out.go:203]   - Booting up control plane ...
	I0207 19:55:51.615143  297158 out.go:203]   - Configuring RBAC rules ...
	I0207 19:55:52.031115  297158 cni.go:93] Creating CNI manager for "calico"
	I0207 19:55:52.033670  297158 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0207 19:55:52.033931  297158 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0207 19:55:52.033955  297158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0207 19:55:52.051399  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0207 19:55:53.799070  297158 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.747628605s)
	I0207 19:55:53.799135  297158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0207 19:55:53.799227  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:53.799227  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb minikube.k8s.io/name=calico-20220207194241-6868 minikube.k8s.io/updated_at=2022_02_07T19_55_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:53.853502  297158 ops.go:34] apiserver oom_adj: -16
	I0207 19:55:53.947243  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:54.548748  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:55.048330  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:55.548292  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:56.049241  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:56.548949  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:57.048467  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:57.548454  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:58.048604  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:58.548319  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:59.048526  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:55:59.548325  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:00.048988  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:00.549135  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:01.048897  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:01.549202  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:02.049135  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:02.548890  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:03.048512  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:03.548315  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:04.049037  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:04.548864  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:05.048455  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:05.548635  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:06.048412  297158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:06.112943  297158 kubeadm.go:1019] duration metric: took 12.313770267s to wait for elevateKubeSystemPrivileges.
	I0207 19:56:06.112980  297158 kubeadm.go:392] StartCluster complete in 27.089145613s
	I0207 19:56:06.113002  297158 settings.go:142] acquiring lock: {Name:mk7529dd3428fdf27408cc6b278cb5c7b03413f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:06.113102  297158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:56:06.114647  297158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig: {Name:mkd7bc53058a925fccbecd7920bc22204f3abc89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:06.636343  297158 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220207194241-6868" rescaled to 1
	I0207 19:56:06.636412  297158 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:56:06.638672  297158 out.go:176] * Verifying Kubernetes components...
	I0207 19:56:06.638738  297158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:56:06.636475  297158 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0207 19:56:06.638811  297158 addons.go:65] Setting storage-provisioner=true in profile "calico-20220207194241-6868"
	I0207 19:56:06.638843  297158 addons.go:153] Setting addon storage-provisioner=true in "calico-20220207194241-6868"
	I0207 19:56:06.636427  297158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	W0207 19:56:06.638854  297158 addons.go:165] addon storage-provisioner should already be in state true
	I0207 19:56:06.638858  297158 addons.go:65] Setting default-storageclass=true in profile "calico-20220207194241-6868"
	I0207 19:56:06.638874  297158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220207194241-6868"
	I0207 19:56:06.638892  297158 host.go:66] Checking if "calico-20220207194241-6868" exists ...
	I0207 19:56:06.636605  297158 config.go:176] Loaded profile config "calico-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:06.639263  297158 cli_runner.go:133] Run: docker container inspect calico-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:06.639449  297158 cli_runner.go:133] Run: docker container inspect calico-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:06.688494  297158 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0207 19:56:06.688667  297158 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 19:56:06.688687  297158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0207 19:56:06.688773  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:56:06.691473  297158 addons.go:153] Setting addon default-storageclass=true in "calico-20220207194241-6868"
	W0207 19:56:06.691497  297158 addons.go:165] addon default-storageclass should already be in state true
	I0207 19:56:06.691523  297158 host.go:66] Checking if "calico-20220207194241-6868" exists ...
	I0207 19:56:06.691956  297158 cli_runner.go:133] Run: docker container inspect calico-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:06.719304  297158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0207 19:56:06.721251  297158 node_ready.go:35] waiting up to 5m0s for node "calico-20220207194241-6868" to be "Ready" ...
	I0207 19:56:06.726909  297158 node_ready.go:49] node "calico-20220207194241-6868" has status "Ready":"True"
	I0207 19:56:06.726945  297158 node_ready.go:38] duration metric: took 5.664748ms waiting for node "calico-20220207194241-6868" to be "Ready" ...
	I0207 19:56:06.726958  297158 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0207 19:56:06.740260  297158 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0207 19:56:06.740286  297158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0207 19:56:06.740344  297158 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220207194241-6868
	I0207 19:56:06.744489  297158 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace to be "Ready" ...
	I0207 19:56:06.745077  297158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/calico-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:06.788912  297158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49419 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/calico-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:06.855338  297158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 19:56:06.961349  297158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0207 19:56:08.764431  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:08.964679  297158 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.245329121s)
	I0207 19:56:08.964712  297158 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
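The sed pipeline that just completed rewrites the CoreDNS ConfigMap by inserting a hosts block immediately before the `forward . /etc/resolv.conf` line, so host.minikube.internal resolves to the host gateway IP inside the cluster. A self-contained sketch of the same text edit in Go; the sample Corefile is illustrative:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS `hosts` block immediately before the
// `forward . /etc/resolv.conf` line of a Corefile, which is what the sed
// pipeline in the log above does before `kubectl replace`-ing the ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			b.WriteString(block) // place the hosts block just above forward
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.67.1"))
}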
	I0207 19:56:09.078430  297158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.223051141s)
	I0207 19:56:09.078510  297158 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.117130584s)
	I0207 19:56:09.080702  297158 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0207 19:56:09.080730  297158 addons.go:417] enableAddons completed in 2.444267921s
	I0207 19:56:11.263488  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:13.263697  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:15.264194  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:17.763602  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:19.764966  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:21.767974  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:24.267776  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:26.764588  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:29.264775  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:31.764321  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:34.264025  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:36.264442  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:38.267318  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:40.829469  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:43.263668  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:45.264086  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:47.766186  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:50.264554  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:52.268252  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:54.269869  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:56.275727  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:58.765722  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:00.838608  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:03.263494  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:05.266208  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:07.763634  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:10.265019  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:12.763164  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:15.264998  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:17.764533  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:19.765218  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:22.265312  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:24.764272  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:26.764889  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:28.767649  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:31.263150  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:33.264275  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:35.264550  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:37.265177  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:39.770306  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:42.263299  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:44.263968  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:46.269283  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:48.764646  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:51.263780  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:53.264795  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:55.765339  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:58.264915  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:00.763741  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:02.765045  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:05.263759  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:07.765273  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:10.264028  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:12.264522  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:14.270825  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:16.763314  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:18.764343  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:21.263998  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:23.764370  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:26.264910  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:28.763253  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:30.763489  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:32.764419  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:35.263183  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:37.263474  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:39.264578  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:41.764222  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:43.764572  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:46.264406  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:48.264960  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:50.763927  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:52.764028  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:55.264640  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:57.264701  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:59.762995  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:01.763781  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:04.272581  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:06.763396  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:09.264559  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:11.764103  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:13.768667  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:16.263608  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:18.765204  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:21.263592  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:23.263982  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:25.768927  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:28.263958  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:30.763640  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:32.763864  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:35.264140  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:37.264548  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:39.764142  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:41.764721  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:43.764858  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:46.266852  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:48.763737  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:51.262923  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:53.263850  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:55.762435  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:57.763584  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:00.263547  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:02.763885  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:05.263326  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:06.767288  297158 pod_ready.go:81] duration metric: took 4m0.022695564s waiting for pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace to be "Ready" ...
	E0207 20:00:06.767318  297158 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
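Every pod_ready.go:102 line above is one poll of the pod's Ready condition. A minimal client-go sketch of that per-pod check (an illustration of the condition test, not minikube's exact source; clientset construction is omitted):

    package podwait

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's PodReady condition is True, which is
    // exactly what the repeated "Ready":"False" lines are testing.
    func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }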
	I0207 20:00:06.767333  297158 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-wqztk" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:08.779603  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:11.279580  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:13.778924  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:16.278574  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:18.283375  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:20.779655  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:22.781529  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:25.278795  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:27.279304  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:29.778979  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:32.278875  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:34.280750  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:36.779019  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:38.780105  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:41.278663  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:43.279139  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:45.778457  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:47.779962  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:49.780288  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:51.780480  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:54.279956  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:56.780342  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:58.838105  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:01.280274  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:03.283383  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:05.779742  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:08.279557  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:10.279629  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:12.779828  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:14.780180  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:17.278434  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:19.279196  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:21.279799  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:23.778491  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:25.778775  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:27.779844  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:30.280712  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:32.779734  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:34.781239  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:37.280102  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:39.280864  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:41.780223  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:43.780860  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:45.796780  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:48.280722  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:50.780321  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:53.280206  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:55.281046  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:57.780196  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:00.281218  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:02.285498  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:04.781825  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:07.280459  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:09.779850  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:11.780314  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:13.781832  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:16.278712  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:18.279676  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:20.281142  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:22.284873  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:24.779403  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:26.780172  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:29.280117  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:31.280595  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:33.781033  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:36.279370  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:38.280253  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:40.778626  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:42.780187  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:44.780247  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:46.780494  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:49.279837  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:51.289185  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:53.780376  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:56.280648  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:58.779569  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:00.780012  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:02.780513  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:05.279247  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:07.280087  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:09.779640  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:12.279284  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:14.280907  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:16.779068  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:18.779530  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:20.780124  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:22.780518  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:25.279629  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:27.779732  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:29.780391  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:31.781195  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:34.278695  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:36.279606  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:38.283175  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:40.781571  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:43.279748  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:45.779355  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:47.780926  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:50.278720  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:52.281421  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:54.779903  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:57.279876  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:59.780045  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:01.780808  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:04.282916  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:06.783166  297158 pod_ready.go:102] pod "calico-node-wqztk" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:06.790091  297158 pod_ready.go:81] duration metric: took 4m0.02272833s waiting for pod "calico-node-wqztk" in "kube-system" namespace to be "Ready" ...
	E0207 20:04:06.790118  297158 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0207 20:04:06.790135  297158 pod_ready.go:38] duration metric: took 8m0.063164122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
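The two per-pod timeouts compose into the 8m0s "extra waiting" total reported above. The loop shape is poll-until-deadline; a hedged sketch using apimachinery's wait helpers and the podReady check sketched earlier (the 2-second interval and 4-minute budget are illustrative, inferred from the spacing and durations in this log rather than taken from minikube's source):

    package podwait

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod is Ready or the budget expires; on
    // expiry PollImmediate returns wait.ErrWaitTimeout, whose message is the
    // "timed out waiting for the condition" error surfaced in the log.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
            return podReady(ctx, c, ns, name)
        })
    }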
	I0207 20:04:06.792432  297158 out.go:176] 
	W0207 20:04:06.792570  297158 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0207 20:04:06.792591  297158 out.go:241] * 
	W0207 20:04:06.793576  297158 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 20:04:06.795554  297158 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (525.13s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (525.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p custom-weave-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: exit status 105 (8m45.51778874s)

                                                
                                                
-- stdout --
	* [custom-weave-20220207194241-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node custom-weave-20220207194241-6868 in cluster custom-weave-20220207194241-6868
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0207 19:55:57.069879  305415 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:55:57.069965  305415 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:55:57.069969  305415 out.go:310] Setting ErrFile to fd 2...
	I0207 19:55:57.069974  305415 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:55:57.070068  305415 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:55:57.070401  305415 out.go:304] Setting JSON to false
	I0207 19:55:57.072354  305415 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5913,"bootTime":1644257844,"procs":895,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:55:57.072460  305415 start.go:122] virtualization: kvm guest
	I0207 19:55:57.075422  305415 out.go:176] * [custom-weave-20220207194241-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0207 19:55:57.076998  305415 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:55:57.075621  305415 notify.go:174] Checking for updates...
	I0207 19:55:57.078517  305415 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:55:57.079936  305415 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:55:57.081445  305415 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	I0207 19:55:57.082939  305415 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0207 19:55:57.083477  305415 config.go:176] Loaded profile config "calico-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:55:57.083594  305415 config.go:176] Loaded profile config "cilium-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:55:57.083735  305415 config.go:176] Loaded profile config "old-k8s-version-20220207194436-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0207 19:55:57.083794  305415 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:55:57.142223  305415 docker.go:132] docker version: linux-20.10.12
	I0207 19:55:57.142318  305415 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:55:57.250225  305415 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:55:57.176273588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:55:57.250396  305415 docker.go:237] overlay module found
	I0207 19:55:57.252870  305415 out.go:176] * Using the docker driver based on user configuration
	I0207 19:55:57.252900  305415 start.go:281] selected driver: docker
	I0207 19:55:57.252906  305415 start.go:798] validating driver "docker" against <nil>
	I0207 19:55:57.252925  305415 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0207 19:55:57.252980  305415 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0207 19:55:57.253001  305415 out.go:241] ! Your cgroup does not allow setting memory.
	I0207 19:55:57.254427  305415 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0207 19:55:57.255049  305415 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:55:57.356879  305415 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:55:57.287328364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:55:57.357019  305415 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 19:55:57.357183  305415 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 19:55:57.357210  305415 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 19:55:57.357232  305415 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0207 19:55:57.357251  305415 start_flags.go:297] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0207 19:55:57.357265  305415 start_flags.go:302] config:
	{Name:custom-weave-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:55:57.359791  305415 out.go:176] * Starting control plane node custom-weave-20220207194241-6868 in cluster custom-weave-20220207194241-6868
	I0207 19:55:57.359834  305415 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:55:57.361247  305415 out.go:176] * Pulling base image ...
	I0207 19:55:57.361270  305415 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:55:57.361299  305415 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 19:55:57.361310  305415 cache.go:57] Caching tarball of preloaded images
	I0207 19:55:57.361374  305415 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:55:57.361589  305415 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 19:55:57.361612  305415 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 19:55:57.361821  305415 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/config.json ...
	I0207 19:55:57.361856  305415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/config.json: {Name:mk9233842781d1aa0f2e44a319e6471dc9090d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:55:57.408193  305415 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 19:55:57.408223  305415 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 19:55:57.408246  305415 cache.go:208] Successfully downloaded all kic artifacts
	I0207 19:55:57.408285  305415 start.go:313] acquiring machines lock for custom-weave-20220207194241-6868: {Name:mkeb95a0c8f7c9d4ddf8cd234cf238ddfb810808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 19:55:57.408434  305415 start.go:317] acquired machines lock for "custom-weave-20220207194241-6868" in 128.658µs
	I0207 19:55:57.408466  305415 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:55:57.408561  305415 start.go:126] createHost starting for "" (driver="docker")
	I0207 19:55:57.411213  305415 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 19:55:57.411479  305415 start.go:160] libmachine.API.Create for "custom-weave-20220207194241-6868" (driver="docker")
	I0207 19:55:57.411531  305415 client.go:168] LocalClient.Create starting
	I0207 19:55:57.411644  305415 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem
	I0207 19:55:57.411693  305415 main.go:130] libmachine: Decoding PEM data...
	I0207 19:55:57.411715  305415 main.go:130] libmachine: Parsing certificate...
	I0207 19:55:57.411805  305415 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem
	I0207 19:55:57.411833  305415 main.go:130] libmachine: Decoding PEM data...
	I0207 19:55:57.411859  305415 main.go:130] libmachine: Parsing certificate...
	I0207 19:55:57.412297  305415 cli_runner.go:133] Run: docker network inspect custom-weave-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 19:55:57.447839  305415 cli_runner.go:180] docker network inspect custom-weave-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 19:55:57.447912  305415 network_create.go:254] running [docker network inspect custom-weave-20220207194241-6868] to gather additional debugging logs...
	I0207 19:55:57.447933  305415 cli_runner.go:133] Run: docker network inspect custom-weave-20220207194241-6868
	W0207 19:55:57.486235  305415 cli_runner.go:180] docker network inspect custom-weave-20220207194241-6868 returned with exit code 1
	I0207 19:55:57.486270  305415 network_create.go:257] error running [docker network inspect custom-weave-20220207194241-6868]: docker network inspect custom-weave-20220207194241-6868: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220207194241-6868
	I0207 19:55:57.486294  305415 network_create.go:259] output of [docker network inspect custom-weave-20220207194241-6868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220207194241-6868
	
	** /stderr **
	I0207 19:55:57.486386  305415 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:55:57.524840  305415 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000114090] misses:0}
	I0207 19:55:57.524899  305415 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 19:55:57.524922  305415 network_create.go:106] attempt to create docker network custom-weave-20220207194241-6868 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0207 19:55:57.524969  305415 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220207194241-6868
	I0207 19:55:57.612773  305415 network_create.go:90] docker network custom-weave-20220207194241-6868 192.168.49.0/24 created
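The network_create step is a plain docker CLI call (cli_runner shells out to the docker binary). A minimal Go sketch of the same invocation, with the flags copied from the Run line above:

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // createClusterNetwork creates the per-cluster bridge network with the
    // subnet, gateway, MTU and label minikube logs above.
    func createClusterNetwork(name string) error {
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24",
            "--gateway=192.168.49.1",
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            name)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("network create: %v: %s", err, out)
        }
        return nil
    }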
	I0207 19:55:57.612821  305415 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20220207194241-6868" container
	I0207 19:55:57.612889  305415 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 19:55:57.651371  305415 cli_runner.go:133] Run: docker volume create custom-weave-20220207194241-6868 --label name.minikube.sigs.k8s.io=custom-weave-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true
	I0207 19:55:57.690173  305415 oci.go:102] Successfully created a docker volume custom-weave-20220207194241-6868
	I0207 19:55:57.690277  305415 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220207194241-6868-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207194241-6868 --entrypoint /usr/bin/test -v custom-weave-20220207194241-6868:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 19:55:58.285197  305415 oci.go:106] Successfully prepared a docker volume custom-weave-20220207194241-6868
	I0207 19:55:58.285265  305415 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:55:58.285283  305415 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 19:55:58.285339  305415 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 19:56:04.162263  305415 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.876850748s)
	I0207 19:56:04.162324  305415 kic.go:188] duration metric: took 5.877038 seconds to extract preloaded images to volume
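The 5.88s Completed line above is the preload extraction: a throwaway container bind-mounts the lz4 tarball read-only and untars it into the cluster's named volume, so the node container starts with its images already in place. A sketch of that docker run (tarball, volume, and image are parameters here; the log shows the full values):

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload runs tar inside the kicbase image, as logged above: the
    // tarball is mounted at /preloaded.tar and the volume at /extractDir.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }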
	W0207 19:56:04.162419  305415 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0207 19:56:04.162438  305415 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0207 19:56:04.162516  305415 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 19:56:04.315172  305415 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220207194241-6868 --name custom-weave-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220207194241-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220207194241-6868 --network custom-weave-20220207194241-6868 --ip 192.168.49.2 --volume custom-weave-20220207194241-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	I0207 19:56:04.825502  305415 cli_runner.go:133] Run: docker container inspect custom-weave-20220207194241-6868 --format={{.State.Running}}
	I0207 19:56:04.898179  305415 cli_runner.go:133] Run: docker container inspect custom-weave-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:04.934542  305415 cli_runner.go:133] Run: docker exec custom-weave-20220207194241-6868 stat /var/lib/dpkg/alternatives/iptables
	I0207 19:56:05.025401  305415 oci.go:281] the created container "custom-weave-20220207194241-6868" has a running status.
	I0207 19:56:05.025437  305415 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/custom-weave-20220207194241-6868/id_rsa...
	I0207 19:56:05.266669  305415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/custom-weave-20220207194241-6868/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0207 19:56:05.377355  305415 cli_runner.go:133] Run: docker container inspect custom-weave-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:05.421991  305415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0207 19:56:05.422016  305415 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220207194241-6868 chown docker:docker /home/docker/.ssh/authorized_keys]
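kic.go:210 above generates the machine's SSH keypair, and kic_runner pushes the public half into the container's authorized_keys before chowning it to the docker user. A self-contained sketch of the keypair step, assuming golang.org/x/crypto/ssh is available; the output file names are illustrative:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate an RSA keypair for the new node.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	// id_rsa stays on the host; id_rsa.pub is what gets copied into the
	// container's /home/docker/.ssh/authorized_keys (then chowned, as above).
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
```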
	I0207 19:56:05.535776  305415 cli_runner.go:133] Run: docker container inspect custom-weave-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:05.579212  305415 machine.go:88] provisioning docker machine ...
	I0207 19:56:05.579264  305415 ubuntu.go:169] provisioning hostname "custom-weave-20220207194241-6868"
	I0207 19:56:05.579322  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:05.630573  305415 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:05.630786  305415 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49424 <nil> <nil>}
	I0207 19:56:05.630810  305415 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220207194241-6868 && echo "custom-weave-20220207194241-6868" | sudo tee /etc/hostname
	I0207 19:56:05.768553  305415 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20220207194241-6868
	
	I0207 19:56:05.768623  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:05.807333  305415 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:05.807504  305415 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49424 <nil> <nil>}
	I0207 19:56:05.807534  305415 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220207194241-6868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220207194241-6868/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220207194241-6868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0207 19:56:05.931974  305415 main.go:130] libmachine: SSH cmd err, output: <nil>: 
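The shell just executed either rewrites an existing 127.0.1.1 line or appends one for the new hostname. The same ensure-hostname logic, sketched in Go; it returns the updated contents instead of writing /etc/hosts, and the containment check is deliberately simplified:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname mirrors the shell above: if a 127.0.1.1 line exists it is
// rewritten to point at the new hostname, otherwise one is appended.
func ensureHostname(hosts, name string) string {
	if strings.Contains(hosts, "\t"+name) || strings.Contains(hosts, " "+name) {
		return hosts // hostname already present somewhere in the file
	}
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostname(string(data), "custom-weave-20220207194241-6868"))
}
```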
	I0207 19:56:05.932024  305415 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube}
	I0207 19:56:05.932057  305415 ubuntu.go:177] setting up certificates
	I0207 19:56:05.932070  305415 provision.go:83] configureAuth start
	I0207 19:56:05.932125  305415 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220207194241-6868
	I0207 19:56:05.990198  305415 provision.go:138] copyHostCerts
	I0207 19:56:05.990262  305415 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem, removing ...
	I0207 19:56:05.990270  305415 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem
	I0207 19:56:05.990328  305415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem (1078 bytes)
	I0207 19:56:05.990488  305415 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem, removing ...
	I0207 19:56:05.990507  305415 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem
	I0207 19:56:05.990553  305415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem (1123 bytes)
	I0207 19:56:05.990626  305415 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem, removing ...
	I0207 19:56:05.990637  305415 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem
	I0207 19:56:05.990666  305415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem (1675 bytes)
	I0207 19:56:05.990786  305415 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220207194241-6868 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220207194241-6868]
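provision.go:112 above signs a server certificate against the profile's CA with the listed SANs. A self-signed stand-in showing the SAN handling with crypto/x509; in minikube the template is signed by the ca.pem/ca-key.pem pair rather than by itself:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs below match the san=[...] list in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-weave-20220207194241-6868"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "custom-weave-20220207194241-6868"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
		panic(err)
	}
}
```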
	I0207 19:56:06.087654  305415 provision.go:172] copyRemoteCerts
	I0207 19:56:06.087710  305415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0207 19:56:06.087749  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:06.131721  305415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/custom-weave-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:06.226703  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0207 19:56:06.247333  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0207 19:56:06.270927  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0207 19:56:06.290560  305415 provision.go:86] duration metric: configureAuth took 358.477864ms
	I0207 19:56:06.290596  305415 ubuntu.go:193] setting minikube options for container-runtime
	I0207 19:56:06.290808  305415 config.go:176] Loaded profile config "custom-weave-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:06.290872  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:06.326582  305415 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:06.326728  305415 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49424 <nil> <nil>}
	I0207 19:56:06.326746  305415 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0207 19:56:06.450937  305415 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0207 19:56:06.450980  305415 ubuntu.go:71] root file system type: overlay
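ubuntu.go:71 derives the root filesystem type from the df probe just run over SSH. The same check as a local Go sketch:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask df for the filesystem type of / and keep the last field of the
	// output ("overlay" inside the kicbase container, as logged above).
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		panic(err)
	}
	fields := strings.Fields(strings.TrimSpace(string(out))) // ["Type", "overlay"]
	fmt.Println("root file system type:", fields[len(fields)-1])
}
```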
	I0207 19:56:06.451132  305415 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0207 19:56:06.451183  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:06.489027  305415 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:06.489193  305415 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49424 <nil> <nil>}
	I0207 19:56:06.489258  305415 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0207 19:56:06.624088  305415 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0207 19:56:06.624181  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:06.671307  305415 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:06.671514  305415 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49424 <nil> <nil>}
	I0207 19:56:06.671559  305415 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0207 19:56:07.543079  305415 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-07 19:56:06.619824635 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0207 19:56:07.543134  305415 machine.go:91] provisioned docker machine in 1.963888047s
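The docker.service contents echoed above, and the diff that follows, are rendered from a Go template in minikube's provisioner before being swapped in and restarted. A cut-down sketch of that rendering for just the ExecStart stanza; the field names in the data map are illustrative, not minikube's actual template inputs:

```go
package main

import (
	"os"
	"text/template"
)

// A stand-in for the template behind /lib/systemd/system/docker.service.new.
const execStartTmpl = `ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}
`

func main() {
	t := template.Must(template.New("execstart").Parse(execStartTmpl))
	err := t.Execute(os.Stdout, map[string]string{
		"CACert":           "/etc/docker/ca.pem",
		"ServerCert":       "/etc/docker/server.pem",
		"ServerKey":        "/etc/docker/server-key.pem",
		"Provider":         "docker",
		"InsecureRegistry": "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
```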
	I0207 19:56:07.543146  305415 client.go:171] LocalClient.Create took 10.131603802s
	I0207 19:56:07.543157  305415 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20220207194241-6868" took 10.131678236s
	I0207 19:56:07.543166  305415 start.go:267] post-start starting for "custom-weave-20220207194241-6868" (driver="docker")
	I0207 19:56:07.543173  305415 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0207 19:56:07.543252  305415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0207 19:56:07.543294  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:07.593186  305415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/custom-weave-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:07.686775  305415 ssh_runner.go:195] Run: cat /etc/os-release
	I0207 19:56:07.689840  305415 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0207 19:56:07.689865  305415 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0207 19:56:07.689873  305415 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0207 19:56:07.689878  305415 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0207 19:56:07.689887  305415 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/addons for local assets ...
	I0207 19:56:07.689938  305415 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files for local assets ...
	I0207 19:56:07.690019  305415 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem -> 68682.pem in /etc/ssl/certs
	I0207 19:56:07.690102  305415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0207 19:56:07.697385  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem --> /etc/ssl/certs/68682.pem (1708 bytes)
	I0207 19:56:07.716520  305415 start.go:270] post-start completed in 173.340619ms
	I0207 19:56:07.716929  305415 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220207194241-6868
	I0207 19:56:07.759849  305415 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/config.json ...
	I0207 19:56:07.760168  305415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:56:07.760218  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:07.798282  305415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/custom-weave-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:07.887754  305415 start.go:129] duration metric: createHost completed in 10.479181631s
	I0207 19:56:07.887783  305415 start.go:80] releasing machines lock for "custom-weave-20220207194241-6868", held for 10.479336339s
	I0207 19:56:07.887855  305415 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220207194241-6868
	I0207 19:56:07.922524  305415 ssh_runner.go:195] Run: systemctl --version
	I0207 19:56:07.922573  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:07.922579  305415 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0207 19:56:07.922630  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:07.975562  305415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/custom-weave-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:07.977357  305415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/custom-weave-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:08.063775  305415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0207 19:56:08.093559  305415 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:56:08.103926  305415 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0207 19:56:08.103999  305415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0207 19:56:08.114637  305415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0207 19:56:08.128963  305415 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0207 19:56:08.223507  305415 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0207 19:56:08.313519  305415 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:56:08.348470  305415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0207 19:56:08.529046  305415 ssh_runner.go:195] Run: sudo systemctl start docker
	I0207 19:56:08.540687  305415 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:56:08.590179  305415 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:56:08.642298  305415 out.go:203] * Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
	I0207 19:56:08.642408  305415 cli_runner.go:133] Run: docker network inspect custom-weave-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:56:08.688744  305415 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0207 19:56:08.692288  305415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0207 19:56:08.704555  305415 out.go:176]   - kubelet.housekeeping-interval=5m
	I0207 19:56:08.704646  305415 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:56:08.704705  305415 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 19:56:08.744915  305415 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.3
	k8s.gcr.io/kube-scheduler:v1.23.3
	k8s.gcr.io/kube-controller-manager:v1.23.3
	k8s.gcr.io/kube-proxy:v1.23.3
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 19:56:08.744945  305415 docker.go:537] Images already preloaded, skipping extraction
	I0207 19:56:08.745000  305415 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 19:56:08.786034  305415 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.3
	k8s.gcr.io/kube-controller-manager:v1.23.3
	k8s.gcr.io/kube-proxy:v1.23.3
	k8s.gcr.io/kube-scheduler:v1.23.3
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 19:56:08.786066  305415 cache_images.go:84] Images are preloaded, skipping loading
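docker.go:537/606 above decides whether to re-extract the preload by listing the daemon's images and comparing them against the expected preloaded set. A sketch of that comparison, with the expected list copied from the stdout block above (dashboard and metrics-scraper omitted for brevity):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"k8s.gcr.io/kube-apiserver:v1.23.3",
		"k8s.gcr.io/kube-controller-manager:v1.23.3",
		"k8s.gcr.io/kube-proxy:v1.23.3",
		"k8s.gcr.io/kube-scheduler:v1.23.3",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/coredns/coredns:v1.8.6",
		"k8s.gcr.io/pause:3.6",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	// Same listing command the log runs inside the node.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := make(map[string]bool)
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, extraction needed:", img)
			return
		}
	}
	fmt.Println("Images already preloaded, skipping extraction")
}
```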
	I0207 19:56:08.786121  305415 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0207 19:56:08.892392  305415 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0207 19:56:08.892436  305415 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0207 19:56:08.892454  305415 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220207194241-6868 NodeName:custom-weave-20220207194241-6868 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0207 19:56:08.892703  305415 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20220207194241-6868"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0207 19:56:08.892826  305415 kubeadm.go:935] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220207194241-6868 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
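The kubelet ExecStart line above is assembled from built-in defaults plus the profile's ExtraOptions (here kubelet.housekeeping-interval=5m, which also appeared earlier in the start output). A sketch of that flag assembly; the extraOption type and kubeletFlags helper are illustrative names, not minikube's actual code:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// extraOption mirrors the ExtraOptions entries in the config dump above,
// e.g. {Component:kubelet Key:housekeeping-interval Value:5m}.
type extraOption struct{ Component, Key, Value string }

func kubeletFlags(base map[string]string, extras []extraOption) string {
	flags := make(map[string]string, len(base))
	for k, v := range base {
		flags[k] = v
	}
	for _, e := range extras {
		if e.Component == "kubelet" {
			flags[e.Key] = e.Value // user-supplied options override defaults
		}
	}
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering, as in the unit above
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, "--"+k+"="+flags[k])
	}
	return strings.Join(parts, " ")
}

func main() {
	base := map[string]string{
		"container-runtime": "docker",
		"hostname-override": "custom-weave-20220207194241-6868",
		"node-ip":           "192.168.49.2",
	}
	extras := []extraOption{{"kubelet", "housekeeping-interval", "5m"}}
	fmt.Println(kubeletFlags(base, extras))
}
```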
	I0207 19:56:08.892890  305415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0207 19:56:08.902078  305415 binaries.go:44] Found k8s binaries, skipping transfer
	I0207 19:56:08.902159  305415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0207 19:56:08.910269  305415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (406 bytes)
	I0207 19:56:08.924493  305415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0207 19:56:08.938536  305415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0207 19:56:08.960474  305415 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0207 19:56:08.966037  305415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0207 19:56:08.977707  305415 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868 for IP: 192.168.49.2
	I0207 19:56:08.977822  305415 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.key
	I0207 19:56:08.977864  305415 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/proxy-client-ca.key
	I0207 19:56:08.977922  305415 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/client.key
	I0207 19:56:08.977940  305415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/client.crt with IP's: []
	I0207 19:56:09.093138  305415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/client.crt ...
	I0207 19:56:09.093173  305415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/client.crt: {Name:mkc707a20d72ee4297a326de9e5a4e0171918eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:09.093394  305415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/client.key ...
	I0207 19:56:09.093416  305415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/client.key: {Name:mk2e301a541e1d9aaee7e8888ba9b79d26cc476e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:09.093561  305415 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.key.dd3b5fb2
	I0207 19:56:09.093581  305415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0207 19:56:09.154132  305415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.crt.dd3b5fb2 ...
	I0207 19:56:09.154166  305415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.crt.dd3b5fb2: {Name:mk737883885f2272ddcdd82c9c4563f8c0c1a1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:09.154398  305415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.key.dd3b5fb2 ...
	I0207 19:56:09.154425  305415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.key.dd3b5fb2: {Name:mk07b7f1cd0addfad23897d9fef88fcef24abb19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:09.154542  305415 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.crt
	I0207 19:56:09.154607  305415 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.key
	I0207 19:56:09.154663  305415 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/proxy-client.key
	I0207 19:56:09.154680  305415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/proxy-client.crt with IP's: []
	I0207 19:56:09.462039  305415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/proxy-client.crt ...
	I0207 19:56:09.462074  305415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/proxy-client.crt: {Name:mk412a7ddd8eff5af1563f984d677b686cfeae94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:09.462298  305415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/proxy-client.key ...
	I0207 19:56:09.462444  305415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/proxy-client.key: {Name:mk6e2d79e09bd79a36e3dd0193f7ce93d6c23d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:09.463037  305415 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/6868.pem (1338 bytes)
	W0207 19:56:09.463100  305415 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/6868_empty.pem, impossibly tiny 0 bytes
	I0207 19:56:09.463118  305415 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem (1675 bytes)
	I0207 19:56:09.463160  305415 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem (1078 bytes)
	I0207 19:56:09.463201  305415 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem (1123 bytes)
	I0207 19:56:09.463233  305415 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem (1675 bytes)
	I0207 19:56:09.463294  305415 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem (1708 bytes)
	I0207 19:56:09.464921  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0207 19:56:09.486104  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0207 19:56:09.509724  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0207 19:56:09.531806  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/custom-weave-20220207194241-6868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0207 19:56:09.552742  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0207 19:56:09.574500  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0207 19:56:09.597085  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0207 19:56:09.617898  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0207 19:56:09.637746  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem --> /usr/share/ca-certificates/68682.pem (1708 bytes)
	I0207 19:56:09.659499  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0207 19:56:09.680477  305415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/6868.pem --> /usr/share/ca-certificates/6868.pem (1338 bytes)
	I0207 19:56:09.699532  305415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0207 19:56:09.713362  305415 ssh_runner.go:195] Run: openssl version
	I0207 19:56:09.718423  305415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68682.pem && ln -fs /usr/share/ca-certificates/68682.pem /etc/ssl/certs/68682.pem"
	I0207 19:56:09.726519  305415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68682.pem
	I0207 19:56:09.729733  305415 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  7 19:21 /usr/share/ca-certificates/68682.pem
	I0207 19:56:09.729795  305415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68682.pem
	I0207 19:56:09.734892  305415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68682.pem /etc/ssl/certs/3ec20f2e.0"
	I0207 19:56:09.742795  305415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0207 19:56:09.750714  305415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:56:09.754187  305415 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  7 19:17 /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:56:09.754253  305415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:56:09.759784  305415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0207 19:56:09.768728  305415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6868.pem && ln -fs /usr/share/ca-certificates/6868.pem /etc/ssl/certs/6868.pem"
	I0207 19:56:09.777117  305415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6868.pem
	I0207 19:56:09.780559  305415 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  7 19:21 /usr/share/ca-certificates/6868.pem
	I0207 19:56:09.780611  305415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6868.pem
	I0207 19:56:09.785643  305415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6868.pem /etc/ssl/certs/51391683.0"
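The three openssl/ln sequences above install each CA certificate into the system trust store under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0) so TLS clients on the node can find it. A sketch of one such link, assuming the openssl binary is present; paths are placeholders:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/6868.pem" // placeholder path
	// Compute the subject hash the same way the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Mirror the "test -L ... || ln -fs ..." guard above.
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("already linked:", link)
		return
	}
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", cert, "->", link)
}
```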
	I0207 19:56:09.793675  305415 kubeadm.go:390] StartCluster: {Name:custom-weave-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:custom-weave-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:56:09.793816  305415 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0207 19:56:09.827483  305415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0207 19:56:09.835381  305415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0207 19:56:09.843353  305415 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0207 19:56:09.843454  305415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0207 19:56:09.851748  305415 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
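kubeadm.go:151 treats the failed ls as "no previous cluster here" and skips stale-config cleanup before running kubeadm init. The existence check it implies, sketched:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Cleanup is only attempted when all four kubeconfig files exist; on a
	// fresh node (as in the log above) the first stat fails and we skip it.
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Println("config check failed, skipping stale config cleanup:", err)
			return
		}
	}
	fmt.Println("existing config found, cleaning up before kubeadm init")
}
```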
	I0207 19:56:09.851795  305415 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0207 19:56:21.795906  305415 out.go:203]   - Generating certificates and keys ...
	I0207 19:56:21.799251  305415 out.go:203]   - Booting up control plane ...
	I0207 19:56:21.802178  305415 out.go:203]   - Configuring RBAC rules ...
	I0207 19:56:21.804704  305415 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0207 19:56:21.806572  305415 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0207 19:56:21.806674  305415 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0207 19:56:21.806728  305415 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0207 19:56:21.847218  305415 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0207 19:56:21.847253  305415 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0207 19:56:21.885888  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0207 19:56:23.340637  305415 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.454707658s)
	I0207 19:56:23.340713  305415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0207 19:56:23.340808  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:23.340808  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb minikube.k8s.io/name=custom-weave-20220207194241-6868 minikube.k8s.io/updated_at=2022_02_07T19_56_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:23.457728  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:23.457738  305415 ops.go:34] apiserver oom_adj: -16
	I0207 19:56:24.031233  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:24.531094  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:25.030672  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:25.531061  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:26.031038  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:26.531549  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:27.031555  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:27.531196  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:28.030690  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:28.531446  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:29.031498  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:29.530980  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:30.031469  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:30.530974  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:31.031587  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:31.531548  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:32.030918  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:32.530941  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:33.031565  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:33.531395  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:34.031653  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:34.531295  305415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:56:34.668848  305415 kubeadm.go:1019] duration metric: took 11.328090587s to wait for elevateKubeSystemPrivileges.
	I0207 19:56:34.668885  305415 kubeadm.go:392] StartCluster complete in 24.875216722s
	I0207 19:56:34.668904  305415 settings.go:142] acquiring lock: {Name:mk7529dd3428fdf27408cc6b278cb5c7b03413f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:34.669008  305415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:56:34.671682  305415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig: {Name:mkd7bc53058a925fccbecd7920bc22204f3abc89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:35.197656  305415 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220207194241-6868" rescaled to 1
	I0207 19:56:35.197725  305415 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:56:35.199840  305415 out.go:176] * Verifying Kubernetes components...
	I0207 19:56:35.197784  305415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0207 19:56:35.199907  305415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:56:35.198040  305415 config.go:176] Loaded profile config "custom-weave-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:35.198059  305415 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0207 19:56:35.200094  305415 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220207194241-6868"
	I0207 19:56:35.200120  305415 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220207194241-6868"
	I0207 19:56:35.200124  305415 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220207194241-6868"
	W0207 19:56:35.200132  305415 addons.go:165] addon storage-provisioner should already be in state true
	I0207 19:56:35.200141  305415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220207194241-6868"
	I0207 19:56:35.200169  305415 host.go:66] Checking if "custom-weave-20220207194241-6868" exists ...
	I0207 19:56:35.200512  305415 cli_runner.go:133] Run: docker container inspect custom-weave-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:35.200737  305415 cli_runner.go:133] Run: docker container inspect custom-weave-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:35.233534  305415 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220207194241-6868" to be "Ready" ...
	I0207 19:56:35.238795  305415 node_ready.go:49] node "custom-weave-20220207194241-6868" has status "Ready":"True"
	I0207 19:56:35.238820  305415 node_ready.go:38] duration metric: took 5.245112ms waiting for node "custom-weave-20220207194241-6868" to be "Ready" ...
	I0207 19:56:35.238833  305415 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0207 19:56:35.264831  305415 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-xtzsf" in "kube-system" namespace to be "Ready" ...
	I0207 19:56:35.271701  305415 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0207 19:56:35.271867  305415 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 19:56:35.271884  305415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0207 19:56:35.271945  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:35.287564  305415 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220207194241-6868"
	W0207 19:56:35.287600  305415 addons.go:165] addon default-storageclass should already be in state true
	I0207 19:56:35.287632  305415 host.go:66] Checking if "custom-weave-20220207194241-6868" exists ...
	I0207 19:56:35.288141  305415 cli_runner.go:133] Run: docker container inspect custom-weave-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:35.329249  305415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/custom-weave-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:35.340194  305415 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0207 19:56:35.340224  305415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0207 19:56:35.340278  305415 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220207194241-6868
	I0207 19:56:35.387558  305415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0207 19:56:35.397088  305415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49424 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/custom-weave-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:35.562869  305415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 19:56:35.749998  305415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0207 19:56:36.250756  305415 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0207 19:56:36.460828  305415 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0207 19:56:36.460862  305415 addons.go:417] enableAddons completed in 1.262804555s
	I0207 19:56:37.293065  305415 pod_ready.go:102] pod "coredns-64897985d-xtzsf" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:39.296557  305415 pod_ready.go:102] pod "coredns-64897985d-xtzsf" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:41.289924  305415 pod_ready.go:97] error getting pod "coredns-64897985d-xtzsf" in "kube-system" namespace (skipping!): pods "coredns-64897985d-xtzsf" not found
	I0207 19:56:41.289961  305415 pod_ready.go:81] duration metric: took 6.024595526s waiting for pod "coredns-64897985d-xtzsf" in "kube-system" namespace to be "Ready" ...
	E0207 19:56:41.289973  305415 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-xtzsf" in "kube-system" namespace (skipping!): pods "coredns-64897985d-xtzsf" not found
	I0207 19:56:41.289981  305415 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-xz5c6" in "kube-system" namespace to be "Ready" ...
	I0207 19:56:43.301097  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:45.301464  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:47.301923  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:49.306976  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:51.802301  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:54.304521  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:56.305362  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:58.803732  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:01.301919  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:03.801465  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:05.802479  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:08.301835  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:10.302384  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:12.800455  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:14.801859  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:17.301585  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:19.305158  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:21.802009  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:23.802132  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:26.300890  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:28.801097  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:30.801280  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:32.801894  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:35.301724  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:37.301940  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:39.302021  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:41.302973  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:43.800539  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:45.801417  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:48.300591  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:50.301775  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:52.801155  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:55.301793  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:57:57.801954  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:00.307532  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:02.801946  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:05.301867  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:07.302137  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:09.801358  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:11.802304  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:14.301802  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:16.302148  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:18.801947  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:21.301926  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:23.801995  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:26.300400  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:28.300595  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:30.301485  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:32.801561  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:34.801924  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:37.300674  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:39.801629  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:42.299854  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:44.300535  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:46.300692  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:48.301661  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:50.802399  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:53.301160  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:55.301848  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:58:57.800966  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:00.300712  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:02.301693  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:04.800430  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:06.801005  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:09.302252  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:11.801614  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:14.302028  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:16.802053  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:19.300702  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:21.301758  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:23.800660  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:25.801217  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:27.801906  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:30.300334  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:32.301431  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:34.800832  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:36.801270  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:38.801675  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:41.300953  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:43.301706  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:45.800285  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:47.801114  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:49.801183  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:52.300889  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:54.301624  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:56.801897  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:59:59.299638  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:01.300383  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:03.300520  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:05.301053  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:07.800489  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:09.801232  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:12.300920  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:14.800499  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:16.800948  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:19.300607  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:21.301626  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:23.801572  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:26.302435  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:28.801850  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:31.301215  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:33.800125  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:35.800899  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:37.801153  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:39.801501  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:41.304091  305415 pod_ready.go:81] duration metric: took 4m0.014097063s waiting for pod "coredns-64897985d-xz5c6" in "kube-system" namespace to be "Ready" ...
	E0207 20:00:41.304117  305415 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0207 20:00:41.304127  305415 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220207194241-6868" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:41.308348  305415 pod_ready.go:92] pod "etcd-custom-weave-20220207194241-6868" in "kube-system" namespace has status "Ready":"True"
	I0207 20:00:41.308368  305415 pod_ready.go:81] duration metric: took 4.232875ms waiting for pod "etcd-custom-weave-20220207194241-6868" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:41.308379  305415 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220207194241-6868" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:41.312242  305415 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220207194241-6868" in "kube-system" namespace has status "Ready":"True"
	I0207 20:00:41.312255  305415 pod_ready.go:81] duration metric: took 3.86945ms waiting for pod "kube-apiserver-custom-weave-20220207194241-6868" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:41.312264  305415 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220207194241-6868" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:41.316253  305415 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220207194241-6868" in "kube-system" namespace has status "Ready":"True"
	I0207 20:00:41.316274  305415 pod_ready.go:81] duration metric: took 4.000914ms waiting for pod "kube-controller-manager-custom-weave-20220207194241-6868" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:41.316286  305415 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-4jmfx" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:41.698784  305415 pod_ready.go:92] pod "kube-proxy-4jmfx" in "kube-system" namespace has status "Ready":"True"
	I0207 20:00:41.698808  305415 pod_ready.go:81] duration metric: took 382.51419ms waiting for pod "kube-proxy-4jmfx" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:41.698821  305415 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220207194241-6868" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:42.098560  305415 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220207194241-6868" in "kube-system" namespace has status "Ready":"True"
	I0207 20:00:42.098585  305415 pod_ready.go:81] duration metric: took 399.755538ms waiting for pod "kube-scheduler-custom-weave-20220207194241-6868" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:42.098599  305415 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-spwvw" in "kube-system" namespace to be "Ready" ...
	I0207 20:00:44.504392  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:47.003648  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:49.504751  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:52.004690  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:54.005293  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:56.505282  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:00:58.505515  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:01.005358  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:03.504608  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:05.504936  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:08.004553  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:10.504958  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:12.505197  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:15.004829  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:17.006878  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:19.504208  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:22.004553  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:24.005161  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:26.008945  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:28.506064  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:31.011576  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:33.504611  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:36.004537  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:38.004598  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:40.504650  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:42.505984  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:45.004782  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:47.504822  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:49.505026  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:52.005639  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:54.504994  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:56.505070  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:01:58.506611  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:01.006401  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:03.505244  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:06.005070  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:08.005940  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:10.505805  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:13.005314  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:15.006296  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:17.504831  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:20.005292  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:22.006301  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:24.010224  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:26.504121  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:28.504695  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:30.504762  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:32.505815  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:35.004921  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:37.503729  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:39.505331  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:42.004209  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:44.004280  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:46.505011  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:48.507611  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:51.003659  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:53.005479  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:55.505488  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:02:58.005393  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:00.504606  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:02.504775  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:04.505416  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:07.005376  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:09.005549  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:11.504438  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:14.004331  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:16.005555  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:18.505280  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:20.507011  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:23.004862  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:25.005044  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:27.009576  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:29.504870  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:31.511672  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:34.004236  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:36.004505  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:38.007042  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:40.506007  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:43.005255  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:45.005478  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:47.505017  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:50.005109  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:52.006043  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:54.504124  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:56.504822  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:03:59.004172  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:01.007295  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:03.504535  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:06.004815  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:08.004952  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:10.006070  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:12.504984  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:14.505027  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:17.004160  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:19.504284  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:22.004047  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:24.504386  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:27.004553  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:29.503654  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:31.504233  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:34.003594  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:36.003874  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:38.504139  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:40.505221  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:42.508181  305415 pod_ready.go:102] pod "weave-net-spwvw" in "kube-system" namespace has status "Ready":"False"
	I0207 20:04:42.508221  305415 pod_ready.go:81] duration metric: took 4m0.409615215s waiting for pod "weave-net-spwvw" in "kube-system" namespace to be "Ready" ...
	E0207 20:04:42.508229  305415 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0207 20:04:42.508233  305415 pod_ready.go:38] duration metric: took 8m7.26938775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0207 20:04:42.508255  305415 api_server.go:51] waiting for apiserver process to appear ...
	I0207 20:04:42.510649  305415 out.go:176] 
	W0207 20:04:42.510754  305415 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0207 20:04:42.510822  305415 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0207 20:04:42.510834  305415 out.go:241] * Related issues:
	* Related issues:
	W0207 20:04:42.510873  305415 out.go:241]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0207 20:04:42.510930  305415 out.go:241]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0207 20:04:42.512829  305415 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (525.54s)
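
What the log above shows: kubeadm init and the testdata/weavenet.yaml CNI manifest both applied cleanly, but coredns-64897985d-xz5c6 and then weave-net-spwvw never reported a Ready condition within their 4m0s per-pod budgets, and once the extra wait ended after 8m7s the kube-apiserver process could no longer be found, so the run exited with K8S_APISERVER_MISSING. For reference, the polling pattern visible in the pod_ready.go lines amounts to the loop sketched below. This is a minimal illustration, not minikube's actual code; it assumes kubectl is on PATH and pointed at the cluster, and uses the pod name from this run purely as an example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady shells out to kubectl and reports whether the pod's Ready
// condition is "True", mirroring the "Ready":"False" checks in the log.
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // the per-pod budget seen in the log
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "weave-net-spwvw") // pod name from this run
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // roughly the cadence of the log's polls
	}
	fmt.Println("timed out waiting for the condition") // matches pod_ready.go:66 above
}
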

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (7.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220207194436-6868 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20220207194436-6868 --alsologtostderr -v=1: exit status 80 (1.797359292s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-20220207194436-6868 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0207 19:56:45.856360  321643 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:56:45.856461  321643 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:56:45.856472  321643 out.go:310] Setting ErrFile to fd 2...
	I0207 19:56:45.856479  321643 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:56:45.856628  321643 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:56:45.856827  321643 out.go:304] Setting JSON to false
	I0207 19:56:45.856846  321643 mustload.go:65] Loading cluster: old-k8s-version-20220207194436-6868
	I0207 19:56:45.857345  321643 config.go:176] Loaded profile config "old-k8s-version-20220207194436-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0207 19:56:45.857944  321643 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220207194436-6868 --format={{.State.Status}}
	I0207 19:56:45.901948  321643 host.go:66] Checking if "old-k8s-version-20220207194436-6868" exists ...
	I0207 19:56:45.902323  321643 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:56:46.054218  321643 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:62 SystemTime:2022-02-07 19:56:45.971582865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:56:46.054928  321643 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0207 19:56:46.057674  321643 out.go:176] * Pausing node old-k8s-version-20220207194436-6868 ... 
	I0207 19:56:46.057704  321643 host.go:66] Checking if "old-k8s-version-20220207194436-6868" exists ...
	I0207 19:56:46.058070  321643 ssh_runner.go:195] Run: systemctl --version
	I0207 19:56:46.058123  321643 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220207194436-6868
	I0207 19:56:46.104800  321643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/old-k8s-version-20220207194436-6868/id_rsa Username:docker}
	I0207 19:56:46.199817  321643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:56:46.211418  321643 pause.go:50] kubelet running: true
	I0207 19:56:46.211491  321643 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0207 19:56:46.392461  321643 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0207 19:56:46.669841  321643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:56:46.689318  321643 pause.go:50] kubelet running: true
	I0207 19:56:46.689384  321643 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0207 19:56:46.874092  321643 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0207 19:56:47.414486  321643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:56:47.426050  321643 pause.go:50] kubelet running: true
	I0207 19:56:47.426117  321643 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0207 19:56:47.562921  321643 out.go:176] 
	W0207 19:56:47.563103  321643 out.go:241] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0207 19:56:47.563120  321643 out.go:241] * 
	* 
	W0207 19:56:47.565400  321643 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 19:56:47.566984  321643 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-linux-amd64 pause -p old-k8s-version-20220207194436-6868 --alsologtostderr -v=1 failed: exit status 80
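
What the log above shows: minikube pause first confirms the kubelet is running, then tries `sudo systemctl disable --now kubelet`; on this node image systemd hands the disable off to update-rc.d, which aborts because the kubelet unit's SysV Default-Start header lists no runlevels, so every attempt exits with status 1. The retry.go lines show two bounded backoffs (276ms, then 540ms) before the third failure is surfaced as GUEST_PAUSE. Below is a minimal sketch of that retry shape, under the assumption that it runs on the node itself with sudo available; it is not minikube's actual pause implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The two backoff delays observed in the log before the final failure.
	backoffs := []time.Duration{276 * time.Millisecond, 540 * time.Millisecond}
	var lastErr error
	for attempt := 0; ; attempt++ {
		// --now stops the unit immediately in addition to disabling it.
		out, err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").CombinedOutput()
		if err == nil {
			fmt.Println("kubelet disabled and stopped")
			return
		}
		lastErr = fmt.Errorf("systemctl disable --now kubelet: %v\n%s", err, out)
		if attempt >= len(backoffs) {
			break // three failed attempts, as in the log, then give up
		}
		fmt.Printf("will retry after %v: %v\n", backoffs[attempt], lastErr)
		time.Sleep(backoffs[attempt])
	}
	// In the log this is surfaced as "Exiting due to GUEST_PAUSE: ...".
	fmt.Println("giving up:", lastErr)
}
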
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207194436-6868
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207194436-6868:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af",
	        "Created": "2022-02-07T19:54:28.823618047Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277987,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-02-07T19:54:29.297605934Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "/var/lib/docker/containers/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af/hostname",
	        "HostsPath": "/var/lib/docker/containers/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af/hosts",
	        "LogPath": "/var/lib/docker/containers/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af-json.log",
	        "Name": "/old-k8s-version-20220207194436-6868",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-20220207194436-6868:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207194436-6868",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ecf4fffac5f9789e79a9e9309d410fd3ac70dfa9c7a613169d65c52951e3686e-init/diff:/var/lib/docker/overlay2/40e36e3239cb5157195ce223d31e5e12299d283013c03c510d3e8a2442fd2c92/diff:/var/lib/docker/overlay2/21617b479acf17653e84d6ae3cb822db5c7eac887dbffb288d5171c45b712c0d/diff:/var/lib/docker/overlay2/2dbc01d4f6abd3524aaa75f3f362b44291e07e9adaadba323bd734a77bfa9c6a/diff:/var/lib/docker/overlay2/1c3968298265a3203685852a8c6fa391e12253b485741654087afb7a90fc1d77/diff:/var/lib/docker/overlay2/6a2a8c5d6504d982da53621a1d6f96ee3336c19fd9f294d5b418cc706dc8944c/diff:/var/lib/docker/overlay2/7e7a079457982ab93f984a944ffef8ef6a0aedcf9ae87dd48d2bfaebfa401212/diff:/var/lib/docker/overlay2/fae622e4af16ac53e0d1ab6e7ec0b23cddddaf4c7b9c906b18db9f5a7421f38d/diff:/var/lib/docker/overlay2/d4355831ba7c15624e8cc51f64415d91ec01d79fc16f0d8cce7cf9819963c9be/diff:/var/lib/docker/overlay2/5453a1a1be3960eaab33a3909934d20d3b1f1d0bd01d04e14158548e63d9ccc7/diff:/var/lib/docker/overlay2/b7f7aaf98954a80aedd0a57753ced767fc40fd261655975f8bb2201f533af508/diff:/var/lib/docker/overlay2/582d45c1dfa23d0fcf227689ca05cc54f60cdf8562c7df098f15c0596f9f3b84/diff:/var/lib/docker/overlay2/97921dc2ea2a25724aa5bc8ee71d705ad02bb5de7327e9125b14e7ed3e0a36d9/diff:/var/lib/docker/overlay2/8994377961c9baa6fdb05a49604c2c1639c56f117040ce16cfcd7068142802d0/diff:/var/lib/docker/overlay2/741d31f19db93cecb47cf3edf12208c50adfa881f267e46fc2200168359e063e/diff:/var/lib/docker/overlay2/be1305b93735b2cb41c1050a14599a08f09c103ef39104313e5c6ea7783a25d0/diff:/var/lib/docker/overlay2/d2c6406a44063188bff06eacfb837bce43d713aa16c08f434607947a2e2aeb2d/diff:/var/lib/docker/overlay2/2354e37c2793df3a7faa18542aa5d3030952a40a0dd4361a9ad132d57efd3dea/diff:/var/lib/docker/overlay2/82b71b4192e75ce019792a62b12c4d48d3352cd8295673aa7b75c929d0c7f4ae/diff:/var/lib/docker/overlay2/6c62b320b27e5a2c13eea8d9b6e430fb56485a76ac7bf171136df923f96334b6/diff:/var/lib/docker/overlay2/f65c213239b185d01f445a11f073325d0aa4a30296ee7125aeec4abc8b80289e/diff:/var/lib/docker/overlay2/f4ab87d7e9bbbf343135421546bd636317abbc0406bd09bc0e7ded9abb5ffe07/diff:/var/lib/docker/overlay2/c962dce8dce172c66b9fae4d0533e0b9eb6f537f99f2ae091522820f3437e87b/diff:/var/lib/docker/overlay2/c5f3b750eb1f675794758011aa1f3cf1afaaea6aeabaacfa7127c4e8eb3e9d3f/diff:/var/lib/docker/overlay2/165d7a930e1764d6612409e5b2abab0706c771e2ea6d53d26f379e5c8420b768/diff:/var/lib/docker/overlay2/c639594ead9cef5a157dcd6c5d3b58acfb87a1b54e09f09a89e5efe42a0250cb/diff:/var/lib/docker/overlay2/22d4ffdeda2486e79e77cdf6b2966c4e3f7a7c1d385f6914cf9abbbafd681fc5/diff:/var/lib/docker/overlay2/06347ddaa20c499bc26010d7a1ef1ac9c484d7088bac49bc47d017af272c5c8b/diff:/var/lib/docker/overlay2/4039a84be3e1b1c0c36b2bd5611308130efae8b5d3993d514489c326b58181a2/diff:/var/lib/docker/overlay2/00ba3d7351a8d15c1f38c8a5267ac7da1315950a1583dfe162bbe06e240d4e4e/diff:/var/lib/docker/overlay2/b66091d419eb3b0a03f2363973ab6750206d5cb1e33c6a80f22ac7b1b1c20015/diff:/var/lib/docker/overlay2/60a3c3f90313e57450868dd29163b9746391dbc376387ee61b371e7753d2a9ed/diff:/var/lib/docker/overlay2/a4077b320de983a23a73f3509a3b65aa35c912b90e61cf3446d45334952197cc/diff:/var/lib/docker/overlay2/87466c009c98c77512f99106ac7b5b4682f6d57d0895993878a55843dfde4f0a/diff:/var/lib/docker/overlay2/be9cd77fbde8968efd17d63e6bf10bab9ae227bf6efd5ff15488effa8ed534f4/diff:/var/lib/docker/overlay2/692a8a7c4d738fb8caee425a6243fdaf5a5c4e7fdb6bda1969cba3c7099060d9/diff:/var/lib/docker/overlay2/90779bbe942cebdf0402a74acd25799917448b7948891aaf60636bbb4410e2d5/diff:/var/lib/docker/overlay2/f403aa656638a54017c9beeb448df9b3957711bbf52e5e92e279dd6a8e3a1a7b/diff:/var/lib/docker/overlay2/3e3a096efd54b9035c41e17e3c469d848ce1cddc9ad895ed288525a89e7d5153/diff:/var/lib/docker/overlay2/71a400a65bb51da094b9d5b672bf3e4973957a356b0480e8fd559aa527c64638/diff:/var/lib/docker/overlay2/5ecbee969df6610687807dc48c221a03964af0e197a0b8f0b5c38b70ab38cf4c/diff:/var/lib/docker/overlay2/1f806f3d9e1cd280380c82dd805cd7489ed4ed1d66b824ad880754d19b08dfa2/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ecf4fffac5f9789e79a9e9309d410fd3ac70dfa9c7a613169d65c52951e3686e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ecf4fffac5f9789e79a9e9309d410fd3ac70dfa9c7a613169d65c52951e3686e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ecf4fffac5f9789e79a9e9309d410fd3ac70dfa9c7a613169d65c52951e3686e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207194436-6868",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207194436-6868/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207194436-6868",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a67d8e6f3789107f066d09286a0b5214bcaae83bc2a80b9924fab00697b4c00d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49403"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49402"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49401"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a67d8e6f3789",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207194436-6868": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "648fc6fa2728",
	                        "old-k8s-version-20220207194436-6868"
	                    ],
	                    "NetworkID": "da5de09917c1f3425d4cc609c0dd233cf5a9c621fdf0b0419beb9a21ca45fdd7",
	                    "EndpointID": "2d33c4f567ebc4a7d665d0b45135cc035ce9cc53cd234a039f116fba62731685",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
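The inspect dump above is what the post-mortem asserts against: a JSON array with one element per container, where State shows whether the node container survived the pause and NetworkSettings.Ports lists the 127.0.0.1 host-port bindings the tests depend on. As a rough illustration of how such a dump can be decoded in Go (a minimal sketch, not the actual helpers_test.go code; the inspectRecord type and the choice of fields are assumptions based only on the output above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // inspectRecord is a hypothetical type for this sketch: it mirrors only
    // the fields of the `docker inspect` JSON above that the post-mortem
    // cares about.
    type inspectRecord struct {
    	Name  string
    	State struct {
    		Status  string
    		Running bool
    		Paused  bool
    	}
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	// `docker inspect` prints a JSON array, one element per container.
    	out, err := exec.Command("docker", "inspect",
    		"old-k8s-version-20220207194436-6868").Output()
    	if err != nil {
    		panic(err)
    	}
    	var records []inspectRecord
    	if err := json.Unmarshal(out, &records); err != nil {
    		panic(err)
    	}
    	for _, r := range records {
    		fmt.Printf("%s: status=%s running=%v paused=%v\n",
    			r.Name, r.State.Status, r.State.Running, r.State.Paused)
    		for port, bindings := range r.NetworkSettings.Ports {
    			for _, b := range bindings {
    				fmt.Printf("  %s -> %s:%s\n", port, b.HostIp, b.HostPort)
    			}
    		}
    	}
    }

Run against the dump above, this would report status=running and the five 127.0.0.1 bindings on host ports 49400 through 49404, matching the Ports block in the JSON.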
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220207194436-6868 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20220207194436-6868 logs -n 25: (1.505897117s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                      Args                      |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                             | newest-cni-20220207195220-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:53:34 UTC | Mon, 07 Feb 2022 19:53:37 UTC |
	|         | newest-cni-20220207195220-6868                 |                                                |         |         |                               |                               |
	| delete  | -p                                             | newest-cni-20220207195220-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:53:37 UTC | Mon, 07 Feb 2022 19:53:37 UTC |
	|         | newest-cni-20220207195220-6868                 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20220207194713-6868              | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:48:39 UTC | Mon, 07 Feb 2022 19:54:17 UTC |
	|         | --memory=2200 --alsologtostderr                |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                    |                                                |         |         |                               |                               |
	|         | --driver=docker                                |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0              |                                                |         |         |                               |                               |
	| start   | -p auto-20220207194241-6868                    | auto-20220207194241-6868                       | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:53:37 UTC | Mon, 07 Feb 2022 19:54:21 UTC |
	|         | --memory=2048                                  |                                                |         |         |                               |                               |
	|         | --alsologtostderr                              |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	| ssh     | -p auto-20220207194241-6868                    | auto-20220207194241-6868                       | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:21 UTC | Mon, 07 Feb 2022 19:54:22 UTC |
	|         | pgrep -a kubelet                               |                                                |         |         |                               |                               |
	| ssh     | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:36 UTC | Mon, 07 Feb 2022 19:54:36 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                     |                                                |         |         |                               |                               |
	| pause   | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:36 UTC | Mon, 07 Feb 2022 19:54:37 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                         |                                                |         |         |                               |                               |
	| unpause | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:38 UTC | Mon, 07 Feb 2022 19:54:39 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                         |                                                |         |         |                               |                               |
	| delete  | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:40 UTC | Mon, 07 Feb 2022 19:54:44 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	| delete  | -p auto-20220207194241-6868                    | auto-20220207194241-6868                       | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:41 UTC | Mon, 07 Feb 2022 19:54:44 UTC |
	| delete  | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:44 UTC | Mon, 07 Feb 2022 19:54:44 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	| start   | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:49:18 UTC | Mon, 07 Feb 2022 19:55:02 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444              |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=docker    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                   |                                                |         |         |                               |                               |
	| ssh     | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:13 UTC | Mon, 07 Feb 2022 19:55:14 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                     |                                                |         |         |                               |                               |
	| pause   | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:14 UTC | Mon, 07 Feb 2022 19:55:15 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                         |                                                |         |         |                               |                               |
	| unpause | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:16 UTC | Mon, 07 Feb 2022 19:55:16 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                         |                                                |         |         |                               |                               |
	| delete  | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:18 UTC | Mon, 07 Feb 2022 19:55:21 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	| delete  | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:21 UTC | Mon, 07 Feb 2022 19:55:21 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	| start   | -p false-20220207194241-6868                   | false-20220207194241-6868                      | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:44 UTC | Mon, 07 Feb 2022 19:55:35 UTC |
	|         | --memory=2048                                  |                                                |         |         |                               |                               |
	|         | --alsologtostderr                              |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                  |                                                |         |         |                               |                               |
	|         | --cni=false --driver=docker                    |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	| ssh     | -p false-20220207194241-6868                   | false-20220207194241-6868                      | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:35 UTC | Mon, 07 Feb 2022 19:55:36 UTC |
	|         | pgrep -a kubelet                               |                                                |         |         |                               |                               |
	| delete  | -p false-20220207194241-6868                   | false-20220207194241-6868                      | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:54 UTC | Mon, 07 Feb 2022 19:55:56 UTC |
	| start   | -p cilium-20220207194241-6868                  | cilium-20220207194241-6868                     | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:44 UTC | Mon, 07 Feb 2022 19:56:16 UTC |
	|         | --memory=2048                                  |                                                |         |         |                               |                               |
	|         | --alsologtostderr                              |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                  |                                                |         |         |                               |                               |
	|         | --cni=cilium --driver=docker                   |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	| ssh     | -p cilium-20220207194241-6868                  | cilium-20220207194241-6868                     | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:56:21 UTC | Mon, 07 Feb 2022 19:56:21 UTC |
	|         | pgrep -a kubelet                               |                                                |         |         |                               |                               |
	| start   | -p                                             | old-k8s-version-20220207194436-6868            | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:51:16 UTC | Mon, 07 Feb 2022 19:56:31 UTC |
	|         | old-k8s-version-20220207194436-6868            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default              |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                  |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                        |                                                |         |         |                               |                               |
	|         | --keep-context=false                           |                                                |         |         |                               |                               |
	|         | --driver=docker                                |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.16.0                   |                                                |         |         |                               |                               |
	| delete  | -p cilium-20220207194241-6868                  | cilium-20220207194241-6868                     | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:56:34 UTC | Mon, 07 Feb 2022 19:56:37 UTC |
	| ssh     | -p                                             | old-k8s-version-20220207194436-6868            | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:56:45 UTC | Mon, 07 Feb 2022 19:56:45 UTC |
	|         | old-k8s-version-20220207194436-6868            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                     |                                                |         |         |                               |                               |
	|---------|------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 19:56:37
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 19:56:37.737056  319486 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:56:37.737139  319486 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:56:37.737150  319486 out.go:310] Setting ErrFile to fd 2...
	I0207 19:56:37.737154  319486 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:56:37.737264  319486 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:56:37.737528  319486 out.go:304] Setting JSON to false
	I0207 19:56:37.739548  319486 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5954,"bootTime":1644257844,"procs":880,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:56:37.739637  319486 start.go:122] virtualization: kvm guest
	I0207 19:56:37.742442  319486 out.go:176] * [enable-default-cni-20220207194241-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0207 19:56:37.744162  319486 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:56:37.742687  319486 notify.go:174] Checking for updates...
	I0207 19:56:37.745678  319486 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:56:37.747294  319486 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:56:37.748727  319486 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	I0207 19:56:37.750084  319486 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0207 19:56:37.750601  319486 config.go:176] Loaded profile config "calico-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:37.750698  319486 config.go:176] Loaded profile config "custom-weave-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:37.750810  319486 config.go:176] Loaded profile config "old-k8s-version-20220207194436-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0207 19:56:37.750856  319486 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:56:37.798064  319486 docker.go:132] docker version: linux-20.10.12
	I0207 19:56:37.798181  319486 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:56:37.900330  319486 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:56:37.830466821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:56:37.900483  319486 docker.go:237] overlay module found
	I0207 19:56:37.902934  319486 out.go:176] * Using the docker driver based on user configuration
	I0207 19:56:37.902966  319486 start.go:281] selected driver: docker
	I0207 19:56:37.902975  319486 start.go:798] validating driver "docker" against <nil>
	I0207 19:56:37.902995  319486 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0207 19:56:37.903058  319486 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0207 19:56:37.903078  319486 out.go:241] ! Your cgroup does not allow setting memory.
	I0207 19:56:37.904679  319486 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0207 19:56:37.905383  319486 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:56:38.036879  319486 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:56:37.947207486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:56:38.037058  319486 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 19:56:38.037256  319486 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	E0207 19:56:38.037274  319486 start_flags.go:440] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0207 19:56:38.037293  319486 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 19:56:38.037313  319486 cni.go:93] Creating CNI manager for "bridge"
	I0207 19:56:38.037319  319486 start_flags.go:297] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0207 19:56:38.037333  319486 start_flags.go:302] config:
	{Name:enable-default-cni-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:enable-default-cni-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:56:38.040483  319486 out.go:176] * Starting control plane node enable-default-cni-20220207194241-6868 in cluster enable-default-cni-20220207194241-6868
	I0207 19:56:38.040548  319486 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:56:38.042219  319486 out.go:176] * Pulling base image ...
	I0207 19:56:38.042258  319486 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:56:38.042308  319486 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 19:56:38.042326  319486 cache.go:57] Caching tarball of preloaded images
	I0207 19:56:38.042491  319486 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:56:38.042683  319486 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 19:56:38.042698  319486 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 19:56:38.042891  319486 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/config.json ...
	I0207 19:56:38.042924  319486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/config.json: {Name:mk898de46ea9ec877fa4c95af930d7a822852910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:38.103395  319486 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 19:56:38.103435  319486 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 19:56:38.103449  319486 cache.go:208] Successfully downloaded all kic artifacts
	I0207 19:56:38.103492  319486 start.go:313] acquiring machines lock for enable-default-cni-20220207194241-6868: {Name:mk73709fb6735ddb764f546b9a13e11a3c431366 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 19:56:38.103647  319486 start.go:317] acquired machines lock for "enable-default-cni-20220207194241-6868" in 125.934µs
	I0207 19:56:38.103685  319486 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:enable-default-cni-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:56:38.103804  319486 start.go:126] createHost starting for "" (driver="docker")
	I0207 19:56:38.267318  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:40.829469  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:37.293065  305415 pod_ready.go:102] pod "coredns-64897985d-xtzsf" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:39.296557  305415 pod_ready.go:102] pod "coredns-64897985d-xtzsf" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:41.289924  305415 pod_ready.go:97] error getting pod "coredns-64897985d-xtzsf" in "kube-system" namespace (skipping!): pods "coredns-64897985d-xtzsf" not found
	I0207 19:56:41.289961  305415 pod_ready.go:81] duration metric: took 6.024595526s waiting for pod "coredns-64897985d-xtzsf" in "kube-system" namespace to be "Ready" ...
	E0207 19:56:41.289973  305415 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-xtzsf" in "kube-system" namespace (skipping!): pods "coredns-64897985d-xtzsf" not found
	I0207 19:56:41.289981  305415 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-xz5c6" in "kube-system" namespace to be "Ready" ...
	I0207 19:56:38.108099  319486 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 19:56:38.108410  319486 start.go:160] libmachine.API.Create for "enable-default-cni-20220207194241-6868" (driver="docker")
	I0207 19:56:38.108452  319486 client.go:168] LocalClient.Create starting
	I0207 19:56:38.108572  319486 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem
	I0207 19:56:38.108611  319486 main.go:130] libmachine: Decoding PEM data...
	I0207 19:56:38.108635  319486 main.go:130] libmachine: Parsing certificate...
	I0207 19:56:38.108727  319486 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem
	I0207 19:56:38.108758  319486 main.go:130] libmachine: Decoding PEM data...
	I0207 19:56:38.108779  319486 main.go:130] libmachine: Parsing certificate...
	I0207 19:56:38.109256  319486 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 19:56:38.158001  319486 cli_runner.go:180] docker network inspect enable-default-cni-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 19:56:38.158096  319486 network_create.go:254] running [docker network inspect enable-default-cni-20220207194241-6868] to gather additional debugging logs...
	I0207 19:56:38.158125  319486 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207194241-6868
	W0207 19:56:38.198863  319486 cli_runner.go:180] docker network inspect enable-default-cni-20220207194241-6868 returned with exit code 1
	I0207 19:56:38.198908  319486 network_create.go:257] error running [docker network inspect enable-default-cni-20220207194241-6868]: docker network inspect enable-default-cni-20220207194241-6868: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220207194241-6868
	I0207 19:56:38.198935  319486 network_create.go:259] output of [docker network inspect enable-default-cni-20220207194241-6868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220207194241-6868
	
	** /stderr **
	I0207 19:56:38.198994  319486 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:56:38.250262  319486 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-64d1bcee4c72 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:13:39:0e:a7}}
	I0207 19:56:38.251501  319486 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000114190] misses:0}
	I0207 19:56:38.251553  319486 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 19:56:38.251574  319486 network_create.go:106] attempt to create docker network enable-default-cni-20220207194241-6868 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 19:56:38.251630  319486 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207194241-6868
	I0207 19:56:38.363334  319486 network_create.go:90] docker network enable-default-cni-20220207194241-6868 192.168.58.0/24 created
	I0207 19:56:38.363383  319486 kic.go:106] calculated static IP "192.168.58.2" for the "enable-default-cni-20220207194241-6868" container
	I0207 19:56:38.363468  319486 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 19:56:38.417380  319486 cli_runner.go:133] Run: docker volume create enable-default-cni-20220207194241-6868 --label name.minikube.sigs.k8s.io=enable-default-cni-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true
	I0207 19:56:38.469432  319486 oci.go:102] Successfully created a docker volume enable-default-cni-20220207194241-6868
	I0207 19:56:38.469534  319486 cli_runner.go:133] Run: docker run --rm --name enable-default-cni-20220207194241-6868-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207194241-6868 --entrypoint /usr/bin/test -v enable-default-cni-20220207194241-6868:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 19:56:39.247660  319486 oci.go:106] Successfully prepared a docker volume enable-default-cni-20220207194241-6868
	I0207 19:56:39.247735  319486 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:56:39.247760  319486 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 19:56:39.247836  319486 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 19:56:43.263668  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:45.264086  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:43.301097  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:45.301464  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:45.421672  319486 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.173774565s)
	I0207 19:56:45.421716  319486 kic.go:188] duration metric: took 6.173953 seconds to extract preloaded images to volume
	W0207 19:56:45.421773  319486 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0207 19:56:45.421788  319486 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0207 19:56:45.421860  319486 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 19:56:45.548198  319486 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207194241-6868 --name enable-default-cni-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207194241-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207194241-6868 --network enable-default-cni-20220207194241-6868 --ip 192.168.58.2 --volume enable-default-cni-20220207194241-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	I0207 19:56:46.122462  319486 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207194241-6868 --format={{.State.Running}}
	I0207 19:56:46.173476  319486 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:46.220551  319486 cli_runner.go:133] Run: docker exec enable-default-cni-20220207194241-6868 stat /var/lib/dpkg/alternatives/iptables
	I0207 19:56:46.309307  319486 oci.go:281] the created container "enable-default-cni-20220207194241-6868" has a running status.
	I0207 19:56:46.309344  319486 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/enable-default-cni-20220207194241-6868/id_rsa...
	I0207 19:56:46.566975  319486 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/enable-default-cni-20220207194241-6868/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0207 19:56:46.667459  319486 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:46.721552  319486 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0207 19:56:46.721584  319486 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220207194241-6868 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0207 19:56:46.838052  319486 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:46.890734  319486 machine.go:88] provisioning docker machine ...
	I0207 19:56:46.890797  319486 ubuntu.go:169] provisioning hostname "enable-default-cni-20220207194241-6868"
	I0207 19:56:46.890869  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:46.930992  319486 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:46.931242  319486 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49429 <nil> <nil>}
	I0207 19:56:46.931267  319486 main.go:130] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-20220207194241-6868 && echo "enable-default-cni-20220207194241-6868" | sudo tee /etc/hostname
	I0207 19:56:47.078700  319486 main.go:130] libmachine: SSH cmd err, output: <nil>: enable-default-cni-20220207194241-6868
	
	I0207 19:56:47.078795  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:47.117628  319486 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:47.117802  319486 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49429 <nil> <nil>}
	I0207 19:56:47.117823  319486 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-20220207194241-6868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-20220207194241-6868/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-20220207194241-6868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0207 19:56:47.243344  319486 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0207 19:56:47.243382  319486 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube}
	I0207 19:56:47.243413  319486 ubuntu.go:177] setting up certificates
	I0207 19:56:47.243425  319486 provision.go:83] configureAuth start
	I0207 19:56:47.243485  319486 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220207194241-6868
	I0207 19:56:47.289604  319486 provision.go:138] copyHostCerts
	I0207 19:56:47.289687  319486 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem, removing ...
	I0207 19:56:47.289727  319486 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem
	I0207 19:56:47.289816  319486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem (1078 bytes)
	I0207 19:56:47.289925  319486 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem, removing ...
	I0207 19:56:47.289953  319486 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem
	I0207 19:56:47.289990  319486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem (1123 bytes)
	I0207 19:56:47.290063  319486 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem, removing ...
	I0207 19:56:47.290078  319486 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem
	I0207 19:56:47.290110  319486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem (1675 bytes)
	I0207 19:56:47.290176  319486 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-20220207194241-6868 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-20220207194241-6868]
	I0207 19:56:47.637071  319486 provision.go:172] copyRemoteCerts
	I0207 19:56:47.637162  319486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0207 19:56:47.637217  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
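
A note on the port lookup pattern in the trace above: with the docker driver, the node's SSH endpoint is a host port published from the container's 22/tcp, which libmachine resolves via docker container inspect. A minimal sketch to reproduce the lookup and connect by hand (profile name and key path taken from this run's logs; the port is whatever the first command prints, and $MINIKUBE_HOME is assumed to point at this run's .minikube directory):

	# Resolve the host port mapped to the node's sshd
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  enable-default-cni-20220207194241-6868
	# Connect with the key minikube generated for this machine
	ssh -i $MINIKUBE_HOME/machines/enable-default-cni-20220207194241-6868/id_rsa \
	  -p <port-from-above> docker@127.0.0.1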
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-02-07 19:54:29 UTC, end at Mon 2022-02-07 19:56:48 UTC. --
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.883815565Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.919791127Z" level=info msg="Loading containers: done."
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.932060851Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.932147988Z" level=info msg="Daemon has completed initialization"
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 systemd[1]: Started Docker Application Container Engine.
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.951551504Z" level=info msg="API listen on [::]:2376"
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.955652311Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 07 19:55:15 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:15.462862785Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:15 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:15.462918362Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:15 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:15.465143198Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:16 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:16.699777424Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Feb 07 19:55:16 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:16.945032971Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Feb 07 19:55:21 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:21.129775798Z" level=info msg="ignoring event" container=869dd1f73299bd0b45c6e89adc58d79abb29a321277df645d66d2e184062c43d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:55:22 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:22.044596540Z" level=info msg="ignoring event" container=a164e1d4889c88d6dcdbc10a152566832564e1a050f3176e48b4d4569fcf3b31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:55:30 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:30.231442199Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:30 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:30.231517839Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:30 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:30.401394519Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:37 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:37.249455874Z" level=info msg="ignoring event" container=723a3448d5f37a421ceeb38e8ec7d159b03b3029bc779f5c349887a001f97292 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:53.074290514Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:53.074394711Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:53.076385609Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:56:04 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:56:04.331182933Z" level=info msg="ignoring event" container=2f193b948eab195589a13ce368f65044625c7c774630e0b95bc29ce5b042e91f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:56:46.079569927Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:56:46.079625765Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:56:46.081883337Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
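
The repeated fake.domain failures above are the expected signal here: the metrics-server image is deliberately pointed at a non-resolvable registry (fake.domain/k8s.gcr.io/echoserver:1.4, per the kubelet section below), so the pull can never succeed. A sketch for pulling the same daemon-side errors out of the node, assuming journald is reachable in the node container as the log header suggests:

	docker exec old-k8s-version-20220207194436-6868 \
	  journalctl -u docker --no-pager | grep fake.domain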
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	2f193b948eab1       a90209bb39e3d       49 seconds ago       Exited              dashboard-metrics-scraper   3                   88de8803491b2
	3326546c62eb1       e1482a24335a6       About a minute ago   Running             kubernetes-dashboard        0                   d399a17c6ac99
	886d7532e9019       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   d3d718b05d1b9
	3cad7072d4135       bf261d1579144       About a minute ago   Running             coredns                     0                   54c673a142aee
	e6652b92a15b3       c21b0c7400f98       About a minute ago   Running             kube-proxy                  0                   87783008c830b
	c08b0cd64d6db       301ddc62b80b1       2 minutes ago        Running             kube-scheduler              0                   0e1fc7447b1db
	633fbb04fe35e       b2756210eeabf       2 minutes ago        Running             etcd                        0                   3f1ac95625308
	7e5c060473dfc       06a629a7e51cd       2 minutes ago        Running             kube-controller-manager     0                   ca9d427ad948c
	118b31137a1fe       b305571ca60a5       2 minutes ago        Running             kube-apiserver              0                   8174bcd3e8f59
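
The table above is minikube's runtime summary; everything except dashboard-metrics-scraper (Exited, attempt 3) is Running. Since this run uses the docker runtime, a sketch for listing the same containers straight from the node (profile name from this run):

	docker exec old-k8s-version-20220207194436-6868 \
	  docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}'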
	
	* 
	* ==> coredns [3cad7072d413] <==
	* .:53
	2022-02-07T19:55:14.964Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2022-02-07T19:55:14.964Z [INFO] CoreDNS-1.6.2
	2022-02-07T19:55:14.964Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2022-02-07T19:55:50.575Z [INFO] plugin/reload: Running configuration MD5 = bca3ea372abcb69ab498841fb7d6d24e
	[INFO] Reloading complete
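
The configuration MD5 change between startup and the reload shows the Corefile was rewritten after CoreDNS came up (commonly minikube updating the kube-system/coredns ConfigMap). A sketch to dump the active Corefile, assuming kubectl is pointed at this cluster's context:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'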
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220207194436-6868
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220207194436-6868
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb
	                    minikube.k8s.io/name=old-k8s-version-20220207194436-6868
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_02_07T19_54_57_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Feb 2022 19:54:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Feb 2022 19:56:46 +0000   Mon, 07 Feb 2022 19:54:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Feb 2022 19:56:46 +0000   Mon, 07 Feb 2022 19:54:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Feb 2022 19:56:46 +0000   Mon, 07 Feb 2022 19:54:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Feb 2022 19:56:46 +0000   Mon, 07 Feb 2022 19:54:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220207194436-6868
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32874652Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32874652Ki
	 pods:               110
	System Info:
	 Machine ID:                 f0d9fc3b84d34ab4ba684459888f0938
	 System UUID:                083453d1-7edc-492c-81f2-6224d6bd0799
	 Boot ID:                    1510a09b-8b2d-457e-ae3f-04f8be50f6e3
	 Kernel Version:             5.11.0-1029-gcp
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-wghxc                                       100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     98s
	  kube-system                etcd-old-k8s-version-20220207194436-6868                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                kube-apiserver-old-k8s-version-20220207194436-6868             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                kube-controller-manager-old-k8s-version-20220207194436-6868    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                kube-proxy-vxvst                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                kube-scheduler-old-k8s-version-20220207194436-6868             100m (1%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                metrics-server-5b7b789f-l4rz7                                  100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         95s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kubernetes-dashboard       dashboard-metrics-scraper-6b84985989-9lc6t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kubernetes-dashboard       kubernetes-dashboard-766959b846-zlhv4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet, old-k8s-version-20220207194436-6868     Node old-k8s-version-20220207194436-6868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet, old-k8s-version-20220207194436-6868     Node old-k8s-version-20220207194436-6868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet, old-k8s-version-20220207194436-6868     Node old-k8s-version-20220207194436-6868 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                    kube-proxy, old-k8s-version-20220207194436-6868  Starting kube-proxy.
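
Worth noting from the node description: the node itself is healthy (Ready=True, no taints, ample allocatable CPU and memory), so the failure in this test is pod-level rather than node-level. Sketch to regenerate this view and cross-check pod state:

	kubectl describe node old-k8s-version-20220207194436-6868
	kubectl get pods -A -o wide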
	
	* 
	* ==> dmesg <==
	* [  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 99 c8 1f b0 26 08 06
	[  +0.000010] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff fa 99 c8 1f b0 26 08 06
	[  +0.215273] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	[  +0.000024] ll header: 00000000: ff ff ff ff ff ff 7e 65 29 2a 5f e9 08 06
	[  +0.165223] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e 1f e9 75 c1 44 08 06
	[  +3.522174] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be b3 aa 34 8f 8a 08 06
	[  +0.453228] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 a3 ae a8 0a ae 08 06
	[  +1.670853] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a f4 83 b2 6d 79 08 06
	[  +1.817833] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 17 75 26 96 41 08 06
	[  +0.967120] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 cb e9 73 9b 24 08 06
	[ +19.232672] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 ce 8b d9 25 4a 08 06
	[  +0.000011] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e2 ce 8b d9 25 4a 08 06
	[  +0.939128] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000024] ll header: 00000000: ff ff ff ff ff ff 52 41 c9 46 b4 15 08 06
	[  +8.463247] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 2b ce bb c7 90 08 06
	
	* 
	* ==> etcd [633fbb04fe35] <==
	* 2022-02-07 19:54:55.314724 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-20220207194436-6868.16d1991ac3602739\" " with result "range_response_count:1 size:535" took too long (295.592843ms) to execute
	2022-02-07 19:54:55.314787 W | etcdserver: request "header:<ID:15638326212616165164 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:coredns\" value_size:206 >> failure:<>>" with result "size:16" took too long (239.487938ms) to execute
	2022-02-07 19:54:55.314887 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" " with result "range_response_count:0 size:5" took too long (293.342153ms) to execute
	2022-02-07 19:54:55.628400 W | etcdserver: request "header:<ID:15638326212616165168 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:coredns\" value_size:241 >> failure:<>>" with result "size:16" took too long (204.569204ms) to execute
	2022-02-07 19:54:55.628518 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (309.909407ms) to execute
	2022-02-07 19:54:55.955467 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/coredns\" " with result "range_response_count:1 size:181" took too long (228.667113ms) to execute
	2022-02-07 19:54:55.955507 W | etcdserver: request "header:<ID:15638326212616165177 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-20220207194436-6868.16d1991afe90631b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-20220207194436-6868.16d1991afe90631b\" value_size:442 lease:6414954175761389303 >> failure:<>>" with result "size:16" took too long (218.600182ms) to execute
	2022-02-07 19:54:56.188158 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (225.552258ms) to execute
	2022-02-07 19:54:56.222152 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:117" took too long (255.119529ms) to execute
	2022-02-07 19:54:56.605440 W | etcdserver: request "header:<ID:15638326212616165191 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/ranges/serviceips\" mod_revision:148 > success:<request_put:<key:\"/registry/ranges/serviceips\" value_size:71 >> failure:<request_range:<key:\"/registry/ranges/serviceips\" > >>" with result "size:16" took too long (291.024777ms) to execute
	2022-02-07 19:54:56.605663 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" " with result "range_response_count:1 size:236" took too long (380.347763ms) to execute
	2022-02-07 19:54:56.845451 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/kube-proxy\" " with result "range_response_count:1 size:187" took too long (174.364595ms) to execute
	2022-02-07 19:54:56.932885 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" " with result "range_response_count:0 size:5" took too long (258.090696ms) to execute
	2022-02-07 19:54:57.628805 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/statefulset-controller\" " with result "range_response_count:1 size:212" took too long (125.601489ms) to execute
	2022-02-07 19:54:57.887340 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (173.479785ms) to execute
	2022-02-07 19:55:30.204955 W | etcdserver: request "header:<ID:15638326212616165855 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-5b7b789f-l4rz7\" mod_revision:469 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-5b7b789f-l4rz7\" value_size:1769 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-5b7b789f-l4rz7\" > >>" with result "size:16" took too long (108.93138ms) to execute
	2022-02-07 19:55:30.594424 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-5b7b789f-l4rz7.16d19923356b2380\" " with result "range_response_count:1 size:501" took too long (189.478871ms) to execute
	2022-02-07 19:55:59.244667 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-old-k8s-version-20220207194436-6868\" " with result "range_response_count:1 size:1926" took too long (190.778076ms) to execute
	2022-02-07 19:55:59.542546 W | etcdserver: request "header:<ID:15638326212616165995 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-old-k8s-version-20220207194436-6868\" mod_revision:551 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-old-k8s-version-20220207194436-6868\" value_size:2273 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-old-k8s-version-20220207194436-6868\" > >>" with result "size:16" took too long (194.761455ms) to execute
	2022-02-07 19:55:59.542762 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (199.743782ms) to execute
	2022-02-07 19:55:59.939198 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses\" range_end:\"/registry/runtimeclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (150.233121ms) to execute
	2022-02-07 19:55:59.939357 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (278.995783ms) to execute
	2022-02-07 19:56:41.847511 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (121.706812ms) to execute
	2022-02-07 19:56:41.847656 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:4055" took too long (102.749872ms) to execute
	2022-02-07 19:56:41.847751 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (121.508166ms) to execute
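
The steady stream of "took too long" warnings points at a slow etcd backend rather than a functional failure; the load average of 8.80 in the kernel section below is consistent with a heavily shared CI host. A sketch to count them for this run (the static-pod name etcd-old-k8s-version-20220207194436-6868 appears in the node description above):

	kubectl -n kube-system logs etcd-old-k8s-version-20220207194436-6868 \
	  | grep -c 'took too long'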
	
	* 
	* ==> kernel <==
	*  19:56:49 up  1:39,  0 users,  load average: 8.80, 5.59, 3.61
	Linux old-k8s-version-20220207194436-6868 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [118b31137a1f] <==
	* I0207 19:54:53.996248       1 trace.go:116] Trace[298250682]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2022-02-07 19:54:53.477308684 +0000 UTC m=+13.108100383) (total time: 518.897725ms):
	Trace[298250682]: [518.842783ms] [429.037573ms] Transaction committed
	I0207 19:54:53.996561       1 trace.go:116] Trace[981877381]: "Patch" url:/api/v1/namespaces/default/events/old-k8s-version-20220207194436-6868.16d1991ac3602739 (started: 2022-02-07 19:54:53.477217291 +0000 UTC m=+13.108009001) (total time: 519.301969ms):
	Trace[981877381]: [89.491101ms] [89.453296ms] About to apply patch
	Trace[981877381]: [519.208192ms] [429.550827ms] Object stored in database
	I0207 19:54:55.739567       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0207 19:54:56.606448       1 trace.go:116] Trace[1673789357]: "GuaranteedUpdate etcd3" type:*core.RangeAllocation (started: 2022-02-07 19:54:55.966683762 +0000 UTC m=+15.597475442) (total time: 639.722259ms):
	Trace[1673789357]: [257.654995ms] [257.654995ms] initial value restored
	Trace[1673789357]: [639.701033ms] [382.024779ms] Transaction committed
	I0207 19:54:56.608458       1 trace.go:116] Trace[291133068]: "Create" url:/api/v1/namespaces/kube-system/services (started: 2022-02-07 19:54:55.965578053 +0000 UTC m=+15.596369753) (total time: 642.85178ms):
	Trace[291133068]: [642.501827ms] [641.51465ms] Object stored in database
	I0207 19:54:56.861130       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0207 19:55:11.087151       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0207 19:55:11.108057       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0207 19:55:11.605839       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0207 19:55:16.099497       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0207 19:55:16.099598       1 handler_proxy.go:99] no RequestInfo found in the context
	E0207 19:55:16.099692       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0207 19:55:16.099711       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0207 19:56:16.100021       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0207 19:56:16.100130       1 handler_proxy.go:99] no RequestInfo found in the context
	E0207 19:56:16.100187       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0207 19:56:16.100204       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
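
The 503s for v1beta1.metrics.k8s.io follow directly from the metrics-server pod never starting: the APIService stays unavailable, so the aggregator keeps rate-limit requeueing it once a minute. Sketch to confirm the chain (the label selector is assumed from the standard metrics-server addon manifest):

	kubectl get apiservice v1beta1.metrics.k8s.io
	kubectl -n kube-system get pods -l k8s-app=metrics-server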
	
	* 
	* ==> kube-controller-manager [7e5c060473df] <==
	* I0207 19:55:14.260341       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"403", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.264918       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.265284       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.289196       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.297870       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.298229       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.352675       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.353041       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.353064       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.353089       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.370000       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.370510       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.370709       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.370790       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.375861       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.375965       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.761954       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-5b7b789f", UID:"c06832a5-1156-4b67-94c0-e4b01dfc2a71", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-5b7b789f-l4rz7
	I0207 19:55:15.472656       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-766959b846-zlhv4
	I0207 19:55:15.472718       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-9lc6t
	E0207 19:55:41.862230       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 19:55:43.612052       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 19:56:12.115509       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 19:56:15.613878       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 19:56:42.367580       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 19:56:47.615906       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
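
Two distinct things are visible above: a short-lived race where the dashboard ReplicaSets are reconciled before their ServiceAccount exists (it clears at 19:55:15 when both pods are created), and the recurring discovery failures for metrics.k8s.io, which share the metrics-server root cause noted earlier. Sketch to verify the dashboard side settled:

	kubectl -n kubernetes-dashboard get serviceaccount,deployment,pod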
	
	* 
	* ==> kube-proxy [e6652b92a15b] <==
	* W0207 19:55:13.745693       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0207 19:55:13.785265       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0207 19:55:13.785319       1 server_others.go:149] Using iptables Proxier.
	I0207 19:55:13.795019       1 server.go:529] Version: v1.16.0
	I0207 19:55:13.796679       1 config.go:313] Starting service config controller
	I0207 19:55:13.806688       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0207 19:55:13.804720       1 config.go:131] Starting endpoints config controller
	I0207 19:55:13.807328       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0207 19:55:13.937788       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0207 19:55:13.937869       1 shared_informer.go:204] Caches are synced for service config 
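
kube-proxy came up cleanly in iptables mode with both config caches synced, so service routing is not implicated in this failure. A sketch to eyeball the programmed service chains from the node, assuming iptables-save is present in the kicbase image (the provisioner's stat on /var/lib/dpkg/alternatives/iptables earlier suggests it is):

	out/minikube-linux-amd64 ssh -p old-k8s-version-20220207194436-6868 \
	  "sudo iptables-save | grep KUBE-SVC"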
	
	* 
	* ==> kube-scheduler [c08b0cd64d6d] <==
	* E0207 19:54:48.570533       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 19:54:48.571666       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:48.572721       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0207 19:54:49.557144       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0207 19:54:49.565004       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0207 19:54:49.565618       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:49.566722       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0207 19:54:49.567745       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0207 19:54:49.568790       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0207 19:54:49.569788       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0207 19:54:49.570958       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0207 19:54:49.572058       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 19:54:49.573213       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:49.574453       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0207 19:54:50.558557       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0207 19:54:50.566449       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0207 19:54:50.567248       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:50.568285       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0207 19:54:50.569241       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0207 19:54:50.570447       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0207 19:54:50.571305       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0207 19:54:50.572558       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0207 19:54:50.573630       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 19:54:50.574795       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:50.575777       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
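
All of the scheduler errors above fall in the 19:54:48-19:54:50 window, the usual startup phase before kubeadm finishes installing RBAC; they clear once the system:kube-scheduler bindings land, and none appear later in this excerpt. Sketch to confirm the binding afterwards:

	kubectl get clusterrolebinding system:kube-scheduler -o wide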
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-02-07 19:54:29 UTC, end at Mon 2022-02-07 19:56:49 UTC. --
	Feb 07 19:55:30 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:30.402165    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:37 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:55:37.996040    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:55:38 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:38.001951    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:55:39 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:55:39.009073    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:55:41 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:41.048907    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 19:55:45 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:45.393821    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:53.076916    1272 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:53.076970    1272 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:53.077030    1272 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:53.077059    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:56:04 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:56:04.188791    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:56:05 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:05.047787    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 19:56:05 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:56:05.318841    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:56:05 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:05.326552    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:06 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:56:06.333677    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:56:06 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:06.338559    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:17 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:17.047691    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 19:56:19 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:19.045967    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:31 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:31.046729    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 19:56:32 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:32.045383    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:43 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:43.045362    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:46.082455    1272 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:46.082504    1272 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:46.082569    1272 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:46.082600    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	
	* 
	* ==> kubernetes-dashboard [3326546c62eb] <==
	* 2022/02/07 19:55:16 Using namespace: kubernetes-dashboard
	2022/02/07 19:55:16 Using in-cluster config to connect to apiserver
	2022/02/07 19:55:16 Using secret token for csrf signing
	2022/02/07 19:55:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/02/07 19:55:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/02/07 19:55:16 Successful initial request to the apiserver, version: v1.16.0
	2022/02/07 19:55:16 Generating JWE encryption key
	2022/02/07 19:55:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/02/07 19:55:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/02/07 19:55:16 Initializing JWE encryption key from synchronized object
	2022/02/07 19:55:16 Creating in-cluster Sidecar client
	2022/02/07 19:55:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 19:55:16 Serving insecurely on HTTP port: 9090
	2022/02/07 19:55:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 19:56:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 19:56:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 19:55:16 Starting overwatch
	
	* 
	* ==> storage-provisioner [886d7532e901] <==
	* I0207 19:55:14.864090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0207 19:55:14.942229       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0207 19:55:14.942301       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0207 19:55:14.970919       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0207 19:55:14.971067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a398cac-aeae-4661-b698-55cee3afd866", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20220207194436-6868_7308e4d0-7f2d-42b1-81bc-5de07deb4fbb became leader
	I0207 19:55:14.971320       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220207194436-6868_7308e4d0-7f2d-42b1-81bc-5de07deb4fbb!
	I0207 19:55:15.073456       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220207194436-6868_7308e4d0-7f2d-42b1-81bc-5de07deb4fbb!
	

-- /stdout --
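Two failure modes repeat through the kubelet log above: metrics-server can never pull fake.domain/k8s.gcr.io/echoserver:1.4 because the registry host is a placeholder that never resolves, so every pull attempt dies at DNS resolution; and dashboard-metrics-scraper sits in CrashLoopBackOff, with kubelet doubling the restart delay (back-off 20s at 19:55:38, 40s by 19:56:05). A minimal sketch, assuming nothing beyond the Go standard library, of the resolution step behind "dial tcp: lookup fake.domain: no such host":

package main

import (
	"fmt"
	"net"
)

// Hypothetical repro, not part of the test suite: the pull errors above fail
// at hostname resolution, before any TLS handshake or HTTP request is made.
func main() {
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		// On failure, net.LookupHost returns a *net.DNSError; IsNotFound is
		// set when the resolver answered NXDOMAIN, as for fake.domain here.
		if dnsErr, ok := err.(*net.DNSError); ok {
			fmt.Printf("lookup failed: %v (IsNotFound=%v)\n", dnsErr, dnsErr.IsNotFound)
			return
		}
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("unexpectedly resolved:", addrs)
}

Any image reference whose registry component hits this path produces exactly the ErrImagePull/ImagePullBackOff cycle recorded above.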
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20220207194436-6868 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-5b7b789f-l4rz7
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220207194436-6868 describe pod metrics-server-5b7b789f-l4rz7
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220207194436-6868 describe pod metrics-server-5b7b789f-l4rz7: exit status 1 (95.727732ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5b7b789f-l4rz7" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220207194436-6868 describe pod metrics-server-5b7b789f-l4rz7: exit status 1
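The exit status 1 above is a benign race rather than a new failure: the field-selector query saw metrics-server-5b7b789f-l4rz7 as non-running, but the pod was deleted before describe could run, so kubectl reports NotFound and the harness just records the non-zero exit. A hypothetical sketch of that run-and-tolerate-NotFound pattern, assuming only the standard library plus a kubectl on PATH (describePod is an illustrative name, not a helper from helpers_test.go):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// describePod runs `kubectl describe` for a pod that may already be gone and
// treats NotFound as a benign outcome, mirroring the post-mortem step above.
func describePod(kubectx, pod string) (string, error) {
	cmd := exec.Command("kubectl", "--context", kubectx, "describe", "pod", pod)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	if err != nil && strings.Contains(stderr.String(), "NotFound") {
		// Pod deleted between `get po` and `describe`: not a test failure.
		return "", nil
	}
	return stdout.String(), err
}

func main() {
	out, err := describePod("old-k8s-version-20220207194436-6868", "metrics-server-5b7b789f-l4rz7")
	if err != nil {
		fmt.Println("describe failed:", err)
		return
	}
	fmt.Print(out)
}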
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220207194436-6868
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220207194436-6868:

-- stdout --
	[
	    {
	        "Id": "648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af",
	        "Created": "2022-02-07T19:54:28.823618047Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277987,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-02-07T19:54:29.297605934Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:50384aa4ebef3abc81b3b83296147bd747dcd04d4644d8f3150476ffa93e6889",
	        "ResolvConfPath": "/var/lib/docker/containers/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af/hostname",
	        "HostsPath": "/var/lib/docker/containers/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af/hosts",
	        "LogPath": "/var/lib/docker/containers/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af/648fc6fa2728d10dc431b66fd18bf4415121c9acdd55486cda8b6ae43e2f66af-json.log",
	        "Name": "/old-k8s-version-20220207194436-6868",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-20220207194436-6868:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220207194436-6868",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ecf4fffac5f9789e79a9e9309d410fd3ac70dfa9c7a613169d65c52951e3686e-init/diff:/var/lib/docker/overlay2/40e36e3239cb5157195ce223d31e5e12299d283013c03c510d3e8a2442fd2c92/diff:/var/lib/docker/overlay2/21617b479acf17653e84d6ae3cb822db5c7eac887dbffb288d5171c45b712c0d/diff:/var/lib/docker/overlay2/2dbc01d4f6abd3524aaa75f3f362b44291e07e9adaadba323bd734a77bfa9c6a/diff:/var/lib/docker/overlay2/1c3968298265a3203685852a8c6fa391e12253b485741654087afb7a90fc1d77/diff:/var/lib/docker/overlay2/6a2a8c5d6504d982da53621a1d6f96ee3336c19fd9f294d5b418cc706dc8944c/diff:/var/lib/docker/overlay2/7e7a079457982ab93f984a944ffef8ef6a0aedcf9ae87dd48d2bfaebfa401212/diff:/var/lib/docker/overlay2/fae622e4af16ac53e0d1ab6e7ec0b23cddddaf4c7b9c906b18db9f5a7421f38d/diff:/var/lib/docker/overlay2/d4355831ba7c15624e8cc51f64415d91ec01d79fc16f0d8cce7cf9819963c9be/diff:/var/lib/docker/overlay2/5453a1a1be3960eaab33a3909934d20d3b1f1d0bd01d04e14158548e63d9ccc7/diff:/var/lib/docker/overlay2/b7f7aa
f98954a80aedd0a57753ced767fc40fd261655975f8bb2201f533af508/diff:/var/lib/docker/overlay2/582d45c1dfa23d0fcf227689ca05cc54f60cdf8562c7df098f15c0596f9f3b84/diff:/var/lib/docker/overlay2/97921dc2ea2a25724aa5bc8ee71d705ad02bb5de7327e9125b14e7ed3e0a36d9/diff:/var/lib/docker/overlay2/8994377961c9baa6fdb05a49604c2c1639c56f117040ce16cfcd7068142802d0/diff:/var/lib/docker/overlay2/741d31f19db93cecb47cf3edf12208c50adfa881f267e46fc2200168359e063e/diff:/var/lib/docker/overlay2/be1305b93735b2cb41c1050a14599a08f09c103ef39104313e5c6ea7783a25d0/diff:/var/lib/docker/overlay2/d2c6406a44063188bff06eacfb837bce43d713aa16c08f434607947a2e2aeb2d/diff:/var/lib/docker/overlay2/2354e37c2793df3a7faa18542aa5d3030952a40a0dd4361a9ad132d57efd3dea/diff:/var/lib/docker/overlay2/82b71b4192e75ce019792a62b12c4d48d3352cd8295673aa7b75c929d0c7f4ae/diff:/var/lib/docker/overlay2/6c62b320b27e5a2c13eea8d9b6e430fb56485a76ac7bf171136df923f96334b6/diff:/var/lib/docker/overlay2/f65c213239b185d01f445a11f073325d0aa4a30296ee7125aeec4abc8b80289e/diff:/var/lib/d
ocker/overlay2/f4ab87d7e9bbbf343135421546bd636317abbc0406bd09bc0e7ded9abb5ffe07/diff:/var/lib/docker/overlay2/c962dce8dce172c66b9fae4d0533e0b9eb6f537f99f2ae091522820f3437e87b/diff:/var/lib/docker/overlay2/c5f3b750eb1f675794758011aa1f3cf1afaaea6aeabaacfa7127c4e8eb3e9d3f/diff:/var/lib/docker/overlay2/165d7a930e1764d6612409e5b2abab0706c771e2ea6d53d26f379e5c8420b768/diff:/var/lib/docker/overlay2/c639594ead9cef5a157dcd6c5d3b58acfb87a1b54e09f09a89e5efe42a0250cb/diff:/var/lib/docker/overlay2/22d4ffdeda2486e79e77cdf6b2966c4e3f7a7c1d385f6914cf9abbbafd681fc5/diff:/var/lib/docker/overlay2/06347ddaa20c499bc26010d7a1ef1ac9c484d7088bac49bc47d017af272c5c8b/diff:/var/lib/docker/overlay2/4039a84be3e1b1c0c36b2bd5611308130efae8b5d3993d514489c326b58181a2/diff:/var/lib/docker/overlay2/00ba3d7351a8d15c1f38c8a5267ac7da1315950a1583dfe162bbe06e240d4e4e/diff:/var/lib/docker/overlay2/b66091d419eb3b0a03f2363973ab6750206d5cb1e33c6a80f22ac7b1b1c20015/diff:/var/lib/docker/overlay2/60a3c3f90313e57450868dd29163b9746391dbc376387ee61b371e7753d
2a9ed/diff:/var/lib/docker/overlay2/a4077b320de983a23a73f3509a3b65aa35c912b90e61cf3446d45334952197cc/diff:/var/lib/docker/overlay2/87466c009c98c77512f99106ac7b5b4682f6d57d0895993878a55843dfde4f0a/diff:/var/lib/docker/overlay2/be9cd77fbde8968efd17d63e6bf10bab9ae227bf6efd5ff15488effa8ed534f4/diff:/var/lib/docker/overlay2/692a8a7c4d738fb8caee425a6243fdaf5a5c4e7fdb6bda1969cba3c7099060d9/diff:/var/lib/docker/overlay2/90779bbe942cebdf0402a74acd25799917448b7948891aaf60636bbb4410e2d5/diff:/var/lib/docker/overlay2/f403aa656638a54017c9beeb448df9b3957711bbf52e5e92e279dd6a8e3a1a7b/diff:/var/lib/docker/overlay2/3e3a096efd54b9035c41e17e3c469d848ce1cddc9ad895ed288525a89e7d5153/diff:/var/lib/docker/overlay2/71a400a65bb51da094b9d5b672bf3e4973957a356b0480e8fd559aa527c64638/diff:/var/lib/docker/overlay2/5ecbee969df6610687807dc48c221a03964af0e197a0b8f0b5c38b70ab38cf4c/diff:/var/lib/docker/overlay2/1f806f3d9e1cd280380c82dd805cd7489ed4ed1d66b824ad880754d19b08dfa2/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ecf4fffac5f9789e79a9e9309d410fd3ac70dfa9c7a613169d65c52951e3686e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ecf4fffac5f9789e79a9e9309d410fd3ac70dfa9c7a613169d65c52951e3686e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ecf4fffac5f9789e79a9e9309d410fd3ac70dfa9c7a613169d65c52951e3686e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220207194436-6868",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220207194436-6868/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220207194436-6868",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220207194436-6868",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a67d8e6f3789107f066d09286a0b5214bcaae83bc2a80b9924fab00697b4c00d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49403"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49402"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49401"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a67d8e6f3789",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220207194436-6868": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "648fc6fa2728",
	                        "old-k8s-version-20220207194436-6868"
	                    ],
	                    "NetworkID": "da5de09917c1f3425d4cc609c0dd233cf5a9c621fdf0b0419beb9a21ca45fdd7",
	                    "EndpointID": "2d33c4f567ebc4a7d665d0b45135cc035ce9cc53cd234a039f116fba62731685",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
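In the inspect dump, HostConfig.PortBindings requests loopback-only bindings with an empty HostPort, and NetworkSettings.Ports carries the ephemeral ports Docker actually assigned (22/tcp on 49404, 8443/tcp on 49401, and so on); reading these back is how a client finds the apiserver endpoint after the fact. A minimal sketch, assuming only a docker CLI on PATH (hostPort is an illustrative helper, not minikube code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks docker inspect, via a Go template, for the host port mapped
// to the given container port, e.g. "8443/tcp" for the apiserver above.
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("old-k8s-version-20220207194436-6868", "8443/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver reachable at 127.0.0.1:" + port)
}

Run against the container above while it is up, this should print 127.0.0.1:49401, matching the Ports block in the dump.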
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220207194436-6868 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20220207194436-6868 logs -n 25: (1.423457659s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                      Args                      |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                             | newest-cni-20220207195220-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:53:37 UTC | Mon, 07 Feb 2022 19:53:37 UTC |
	|         | newest-cni-20220207195220-6868                 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20220207194713-6868              | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:48:39 UTC | Mon, 07 Feb 2022 19:54:17 UTC |
	|         | --memory=2200 --alsologtostderr                |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                    |                                                |         |         |                               |                               |
	|         | --driver=docker                                |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.4-rc.0              |                                                |         |         |                               |                               |
	| start   | -p auto-20220207194241-6868                    | auto-20220207194241-6868                       | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:53:37 UTC | Mon, 07 Feb 2022 19:54:21 UTC |
	|         | --memory=2048                                  |                                                |         |         |                               |                               |
	|         | --alsologtostderr                              |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                  |                                                |         |         |                               |                               |
	|         | --driver=docker                                |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	| ssh     | -p auto-20220207194241-6868                    | auto-20220207194241-6868                       | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:21 UTC | Mon, 07 Feb 2022 19:54:22 UTC |
	|         | pgrep -a kubelet                               |                                                |         |         |                               |                               |
	| ssh     | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:36 UTC | Mon, 07 Feb 2022 19:54:36 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                     |                                                |         |         |                               |                               |
	| pause   | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:36 UTC | Mon, 07 Feb 2022 19:54:37 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                         |                                                |         |         |                               |                               |
	| unpause | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:38 UTC | Mon, 07 Feb 2022 19:54:39 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                         |                                                |         |         |                               |                               |
	| delete  | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:40 UTC | Mon, 07 Feb 2022 19:54:44 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	| delete  | -p auto-20220207194241-6868                    | auto-20220207194241-6868                       | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:41 UTC | Mon, 07 Feb 2022 19:54:44 UTC |
	| delete  | -p                                             | no-preload-20220207194713-6868                 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:44 UTC | Mon, 07 Feb 2022 19:54:44 UTC |
	|         | no-preload-20220207194713-6868                 |                                                |         |         |                               |                               |
	| start   | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:49:18 UTC | Mon, 07 Feb 2022 19:55:02 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444              |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=docker    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3                   |                                                |         |         |                               |                               |
	| ssh     | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:13 UTC | Mon, 07 Feb 2022 19:55:14 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                     |                                                |         |         |                               |                               |
	| pause   | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:14 UTC | Mon, 07 Feb 2022 19:55:15 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                         |                                                |         |         |                               |                               |
	| unpause | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:16 UTC | Mon, 07 Feb 2022 19:55:16 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                         |                                                |         |         |                               |                               |
	| delete  | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:18 UTC | Mon, 07 Feb 2022 19:55:21 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	| delete  | -p                                             | default-k8s-different-port-20220207194800-6868 | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:21 UTC | Mon, 07 Feb 2022 19:55:21 UTC |
	|         | default-k8s-different-port-20220207194800-6868 |                                                |         |         |                               |                               |
	| start   | -p false-20220207194241-6868                   | false-20220207194241-6868                      | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:44 UTC | Mon, 07 Feb 2022 19:55:35 UTC |
	|         | --memory=2048                                  |                                                |         |         |                               |                               |
	|         | --alsologtostderr                              |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                  |                                                |         |         |                               |                               |
	|         | --cni=false --driver=docker                    |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	| ssh     | -p false-20220207194241-6868                   | false-20220207194241-6868                      | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:35 UTC | Mon, 07 Feb 2022 19:55:36 UTC |
	|         | pgrep -a kubelet                               |                                                |         |         |                               |                               |
	| delete  | -p false-20220207194241-6868                   | false-20220207194241-6868                      | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:55:54 UTC | Mon, 07 Feb 2022 19:55:56 UTC |
	| start   | -p cilium-20220207194241-6868                  | cilium-20220207194241-6868                     | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:54:44 UTC | Mon, 07 Feb 2022 19:56:16 UTC |
	|         | --memory=2048                                  |                                                |         |         |                               |                               |
	|         | --alsologtostderr                              |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                  |                                                |         |         |                               |                               |
	|         | --cni=cilium --driver=docker                   |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	| ssh     | -p cilium-20220207194241-6868                  | cilium-20220207194241-6868                     | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:56:21 UTC | Mon, 07 Feb 2022 19:56:21 UTC |
	|         | pgrep -a kubelet                               |                                                |         |         |                               |                               |
	| start   | -p                                             | old-k8s-version-20220207194436-6868            | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:51:16 UTC | Mon, 07 Feb 2022 19:56:31 UTC |
	|         | old-k8s-version-20220207194436-6868            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default              |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                  |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                        |                                                |         |         |                               |                               |
	|         | --keep-context=false                           |                                                |         |         |                               |                               |
	|         | --driver=docker                                |                                                |         |         |                               |                               |
	|         | --container-runtime=docker                     |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.16.0                   |                                                |         |         |                               |                               |
	| delete  | -p cilium-20220207194241-6868                  | cilium-20220207194241-6868                     | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:56:34 UTC | Mon, 07 Feb 2022 19:56:37 UTC |
	| ssh     | -p                                             | old-k8s-version-20220207194436-6868            | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:56:45 UTC | Mon, 07 Feb 2022 19:56:45 UTC |
	|         | old-k8s-version-20220207194436-6868            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                     |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20220207194436-6868            | old-k8s-version-20220207194436-6868            | jenkins | v1.25.1 | Mon, 07 Feb 2022 19:56:48 UTC | Mon, 07 Feb 2022 19:56:49 UTC |
	|         | logs -n 25                                     |                                                |         |         |                               |                               |
	|---------|------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 19:56:37
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 19:56:37.737056  319486 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:56:37.737139  319486 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:56:37.737150  319486 out.go:310] Setting ErrFile to fd 2...
	I0207 19:56:37.737154  319486 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:56:37.737264  319486 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:56:37.737528  319486 out.go:304] Setting JSON to false
	I0207 19:56:37.739548  319486 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5954,"bootTime":1644257844,"procs":880,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:56:37.739637  319486 start.go:122] virtualization: kvm guest
	I0207 19:56:37.742442  319486 out.go:176] * [enable-default-cni-20220207194241-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0207 19:56:37.744162  319486 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:56:37.742687  319486 notify.go:174] Checking for updates...
	I0207 19:56:37.745678  319486 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:56:37.747294  319486 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:56:37.748727  319486 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	I0207 19:56:37.750084  319486 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0207 19:56:37.750601  319486 config.go:176] Loaded profile config "calico-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:37.750698  319486 config.go:176] Loaded profile config "custom-weave-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:37.750810  319486 config.go:176] Loaded profile config "old-k8s-version-20220207194436-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0207 19:56:37.750856  319486 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:56:37.798064  319486 docker.go:132] docker version: linux-20.10.12
	I0207 19:56:37.798181  319486 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:56:37.900330  319486 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:56:37.830466821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:56:37.900483  319486 docker.go:237] overlay module found
	I0207 19:56:37.902934  319486 out.go:176] * Using the docker driver based on user configuration
	I0207 19:56:37.902966  319486 start.go:281] selected driver: docker
	I0207 19:56:37.902975  319486 start.go:798] validating driver "docker" against <nil>
	I0207 19:56:37.902995  319486 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0207 19:56:37.903058  319486 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0207 19:56:37.903078  319486 out.go:241] ! Your cgroup does not allow setting memory.
	I0207 19:56:37.904679  319486 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0207 19:56:37.905383  319486 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:56:38.036879  319486 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:56:37.947207486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:56:38.037058  319486 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 19:56:38.037256  319486 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	E0207 19:56:38.037274  319486 start_flags.go:440] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0207 19:56:38.037293  319486 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 19:56:38.037313  319486 cni.go:93] Creating CNI manager for "bridge"
	I0207 19:56:38.037319  319486 start_flags.go:297] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0207 19:56:38.037333  319486 start_flags.go:302] config:
	{Name:enable-default-cni-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:enable-default-cni-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:56:38.040483  319486 out.go:176] * Starting control plane node enable-default-cni-20220207194241-6868 in cluster enable-default-cni-20220207194241-6868
	I0207 19:56:38.040548  319486 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:56:38.042219  319486 out.go:176] * Pulling base image ...
	I0207 19:56:38.042258  319486 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:56:38.042308  319486 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 19:56:38.042326  319486 cache.go:57] Caching tarball of preloaded images
	I0207 19:56:38.042491  319486 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:56:38.042683  319486 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 19:56:38.042698  319486 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 19:56:38.042891  319486 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/config.json ...
	I0207 19:56:38.042924  319486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/config.json: {Name:mk898de46ea9ec877fa4c95af930d7a822852910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:38.103395  319486 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 19:56:38.103435  319486 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 19:56:38.103449  319486 cache.go:208] Successfully downloaded all kic artifacts
	I0207 19:56:38.103492  319486 start.go:313] acquiring machines lock for enable-default-cni-20220207194241-6868: {Name:mk73709fb6735ddb764f546b9a13e11a3c431366 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 19:56:38.103647  319486 start.go:317] acquired machines lock for "enable-default-cni-20220207194241-6868" in 125.934µs
	I0207 19:56:38.103685  319486 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:enable-default-cni-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:56:38.103804  319486 start.go:126] createHost starting for "" (driver="docker")
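The start.go:313/317 lines above take a per-profile machines lock (Delay:500ms, Timeout:10m0s) before provisioning, so parallel test runs cannot create the same machine twice. A minimal, self-contained sketch of that acquire-with-retry pattern; the helper name and lock path here are illustrative, not minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file until timeout.
	// O_CREATE|O_EXCL makes creation atomic: exactly one process wins.
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay) // matches the Delay:500ms in the log above
		}
	}

	func main() {
		release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to provision")
	}

The 125µs acquisition in the log shows the uncontended fast path; the retry loop only matters when another profile holds the lock.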
	I0207 19:56:38.267318  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:40.829469  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:37.293065  305415 pod_ready.go:102] pod "coredns-64897985d-xtzsf" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:39.296557  305415 pod_ready.go:102] pod "coredns-64897985d-xtzsf" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:41.289924  305415 pod_ready.go:97] error getting pod "coredns-64897985d-xtzsf" in "kube-system" namespace (skipping!): pods "coredns-64897985d-xtzsf" not found
	I0207 19:56:41.289961  305415 pod_ready.go:81] duration metric: took 6.024595526s waiting for pod "coredns-64897985d-xtzsf" in "kube-system" namespace to be "Ready" ...
	E0207 19:56:41.289973  305415 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-xtzsf" in "kube-system" namespace (skipping!): pods "coredns-64897985d-xtzsf" not found
	I0207 19:56:41.289981  305415 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-xz5c6" in "kube-system" namespace to be "Ready" ...
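The interleaved pod_ready.go lines (PIDs 297158 and 305415) come from other tests running in parallel; each polls a kube-system pod for the Ready condition and treats a vanished pod as terminal, moving on to its replacement (coredns-64897985d-xtzsf above was replaced by coredns-64897985d-xz5c6). A rough client-go equivalent of that wait loop, assuming a plain poll rather than a watch is acceptable:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports the Ready condition,
	// treating NotFound as terminal (the pod may have been replaced).
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, fmt.Errorf("pod %q not found (skipping)", name)
			}
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-64897985d-xz5c6", 5*time.Minute))
	}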
	I0207 19:56:38.108099  319486 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 19:56:38.108410  319486 start.go:160] libmachine.API.Create for "enable-default-cni-20220207194241-6868" (driver="docker")
	I0207 19:56:38.108452  319486 client.go:168] LocalClient.Create starting
	I0207 19:56:38.108572  319486 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem
	I0207 19:56:38.108611  319486 main.go:130] libmachine: Decoding PEM data...
	I0207 19:56:38.108635  319486 main.go:130] libmachine: Parsing certificate...
	I0207 19:56:38.108727  319486 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem
	I0207 19:56:38.108758  319486 main.go:130] libmachine: Decoding PEM data...
	I0207 19:56:38.108779  319486 main.go:130] libmachine: Parsing certificate...
	I0207 19:56:38.109256  319486 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 19:56:38.158001  319486 cli_runner.go:180] docker network inspect enable-default-cni-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 19:56:38.158096  319486 network_create.go:254] running [docker network inspect enable-default-cni-20220207194241-6868] to gather additional debugging logs...
	I0207 19:56:38.158125  319486 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207194241-6868
	W0207 19:56:38.198863  319486 cli_runner.go:180] docker network inspect enable-default-cni-20220207194241-6868 returned with exit code 1
	I0207 19:56:38.198908  319486 network_create.go:257] error running [docker network inspect enable-default-cni-20220207194241-6868]: docker network inspect enable-default-cni-20220207194241-6868: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20220207194241-6868
	I0207 19:56:38.198935  319486 network_create.go:259] output of [docker network inspect enable-default-cni-20220207194241-6868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20220207194241-6868
	
	** /stderr **
	I0207 19:56:38.198994  319486 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:56:38.250262  319486 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-64d1bcee4c72 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:13:39:0e:a7}}
	I0207 19:56:38.251501  319486 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000114190] misses:0}
	I0207 19:56:38.251553  319486 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 19:56:38.251574  319486 network_create.go:106] attempt to create docker network enable-default-cni-20220207194241-6868 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0207 19:56:38.251630  319486 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20220207194241-6868
	I0207 19:56:38.363334  319486 network_create.go:90] docker network enable-default-cni-20220207194241-6868 192.168.58.0/24 created
	I0207 19:56:38.363383  319486 kic.go:106] calculated static IP "192.168.58.2" for the "enable-default-cni-20220207194241-6868" container
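The network.go lines above scan private /24 candidates, skip 192.168.49.0/24 because an existing bridge (br-64d1bcee4c72) already owns it, reserve 192.168.58.0/24, and derive the container's static IP (.2) from the chosen subnet. A self-contained sketch of such a scan; the step of 9 in the third octet is inferred from the 49 -> 58 jump in this log, and the function name is hypothetical:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet walks /24 candidates in 192.168.0.0/16, stepping the
	// third octet, and returns the first one not already claimed by any
	// host interface address.
	func firstFreeSubnet(startOctet, step int) (*net.IPNet, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return nil, err
		}
		for o := startOctet; o < 256; o += step {
			_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", o))
			taken := false
			for _, a := range addrs {
				// Interface addresses stringify as CIDR, e.g. "192.168.49.1/24".
				if ip, _, err := net.ParseCIDR(a.String()); err == nil && candidate.Contains(ip) {
					taken = true
					break
				}
			}
			if !taken {
				return candidate, nil
			}
		}
		return nil, fmt.Errorf("no free /24 found")
	}

	func main() {
		subnet, err := firstFreeSubnet(49, 9)
		fmt.Println(subnet, err)
	}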
	I0207 19:56:38.363468  319486 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 19:56:38.417380  319486 cli_runner.go:133] Run: docker volume create enable-default-cni-20220207194241-6868 --label name.minikube.sigs.k8s.io=enable-default-cni-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true
	I0207 19:56:38.469432  319486 oci.go:102] Successfully created a docker volume enable-default-cni-20220207194241-6868
	I0207 19:56:38.469534  319486 cli_runner.go:133] Run: docker run --rm --name enable-default-cni-20220207194241-6868-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207194241-6868 --entrypoint /usr/bin/test -v enable-default-cni-20220207194241-6868:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 19:56:39.247660  319486 oci.go:106] Successfully prepared a docker volume enable-default-cni-20220207194241-6868
	I0207 19:56:39.247735  319486 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:56:39.247760  319486 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 19:56:39.247836  319486 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 19:56:43.263668  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:45.264086  297158 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b6p26" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:43.301097  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:45.301464  305415 pod_ready.go:102] pod "coredns-64897985d-xz5c6" in "kube-system" namespace has status "Ready":"False"
	I0207 19:56:45.421672  319486 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.173774565s)
	I0207 19:56:45.421716  319486 kic.go:188] duration metric: took 6.173953 seconds to extract preloaded images to volume
	W0207 19:56:45.421773  319486 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0207 19:56:45.421788  319486 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0207 19:56:45.421860  319486 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 19:56:45.548198  319486 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20220207194241-6868 --name enable-default-cni-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20220207194241-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20220207194241-6868 --network enable-default-cni-20220207194241-6868 --ip 192.168.58.2 --volume enable-default-cni-20220207194241-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	I0207 19:56:46.122462  319486 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207194241-6868 --format={{.State.Running}}
	I0207 19:56:46.173476  319486 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:46.220551  319486 cli_runner.go:133] Run: docker exec enable-default-cni-20220207194241-6868 stat /var/lib/dpkg/alternatives/iptables
	I0207 19:56:46.309307  319486 oci.go:281] the created container "enable-default-cni-20220207194241-6868" has a running status.
	I0207 19:56:46.309344  319486 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/enable-default-cni-20220207194241-6868/id_rsa...
	I0207 19:56:46.566975  319486 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/enable-default-cni-20220207194241-6868/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0207 19:56:46.667459  319486 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:46.721552  319486 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0207 19:56:46.721584  319486 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-20220207194241-6868 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0207 19:56:46.838052  319486 cli_runner.go:133] Run: docker container inspect enable-default-cni-20220207194241-6868 --format={{.State.Status}}
	I0207 19:56:46.890734  319486 machine.go:88] provisioning docker machine ...
	I0207 19:56:46.890797  319486 ubuntu.go:169] provisioning hostname "enable-default-cni-20220207194241-6868"
	I0207 19:56:46.890869  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:46.930992  319486 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:46.931242  319486 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49429 <nil> <nil>}
	I0207 19:56:46.931267  319486 main.go:130] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-20220207194241-6868 && echo "enable-default-cni-20220207194241-6868" | sudo tee /etc/hostname
	I0207 19:56:47.078700  319486 main.go:130] libmachine: SSH cmd err, output: <nil>: enable-default-cni-20220207194241-6868
	
	I0207 19:56:47.078795  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:47.117628  319486 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:47.117802  319486 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49429 <nil> <nil>}
	I0207 19:56:47.117823  319486 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-20220207194241-6868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-20220207194241-6868/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-20220207194241-6868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0207 19:56:47.243344  319486 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0207 19:56:47.243382  319486 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube}
	I0207 19:56:47.243413  319486 ubuntu.go:177] setting up certificates
	I0207 19:56:47.243425  319486 provision.go:83] configureAuth start
	I0207 19:56:47.243485  319486 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220207194241-6868
	I0207 19:56:47.289604  319486 provision.go:138] copyHostCerts
	I0207 19:56:47.289687  319486 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem, removing ...
	I0207 19:56:47.289727  319486 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem
	I0207 19:56:47.289816  319486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem (1078 bytes)
	I0207 19:56:47.289925  319486 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem, removing ...
	I0207 19:56:47.289953  319486 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem
	I0207 19:56:47.289990  319486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem (1123 bytes)
	I0207 19:56:47.290063  319486 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem, removing ...
	I0207 19:56:47.290078  319486 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem
	I0207 19:56:47.290110  319486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem (1675 bytes)
	I0207 19:56:47.290176  319486 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-20220207194241-6868 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-20220207194241-6868]
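provision.go generates a server certificate signed by the minikube CA with the SANs listed above (the node IP, loopback, localhost, minikube, and the profile name). A compressed crypto/x509 sketch of that flow, generating a throwaway CA in-process instead of loading ca.pem/ca-key.pem, with error checks elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA key/cert (minikube loads these from .minikube/certs).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SANs seen in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-20220207194241-6868"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "enable-default-cni-20220207194241-6868"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}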
	I0207 19:56:47.637071  319486 provision.go:172] copyRemoteCerts
	I0207 19:56:47.637162  319486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0207 19:56:47.637217  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:47.692739  319486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49429 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/enable-default-cni-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:47.794391  319486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0207 19:56:47.823452  319486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0207 19:56:47.847687  319486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0207 19:56:47.874239  319486 provision.go:86] duration metric: configureAuth took 630.79691ms
	I0207 19:56:47.874282  319486 ubuntu.go:193] setting minikube options for container-runtime
	I0207 19:56:47.874540  319486 config.go:176] Loaded profile config "enable-default-cni-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:47.874605  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:47.926566  319486 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:47.926757  319486 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49429 <nil> <nil>}
	I0207 19:56:47.926779  319486 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0207 19:56:48.064753  319486 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0207 19:56:48.064783  319486 ubuntu.go:71] root file system type: overlay
	I0207 19:56:48.064955  319486 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0207 19:56:48.065011  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:48.120132  319486 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:48.120328  319486 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49429 <nil> <nil>}
	I0207 19:56:48.120423  319486 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0207 19:56:48.278177  319486 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0207 19:56:48.278257  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:48.332341  319486 main.go:130] libmachine: Using SSH client type: native
	I0207 19:56:48.332601  319486 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49429 <nil> <nil>}
	I0207 19:56:48.332628  319486 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0207 19:56:49.218079  319486 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-07 19:56:48.271737554 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0207 19:56:49.218124  319486 machine.go:91] provisioned docker machine in 2.327359867s
	I0207 19:56:49.218138  319486 client.go:171] LocalClient.Create took 11.109679339s
	I0207 19:56:49.218158  319486 start.go:168] duration metric: libmachine.API.Create for "enable-default-cni-20220207194241-6868" took 11.109749172s
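The docker.service update above is deliberately idempotent: the diff -u ... || { mv ...; daemon-reload; enable; restart; } one-liner only swaps the unit in and bounces the daemon when the rendered file actually differs. The same write-if-changed pattern, sketched locally in Go (minikube runs it remotely over SSH; the function name is illustrative):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// replaceIfChanged installs newPath over path only when contents differ,
	// then reloads systemd and restarts the unit.
	func replaceIfChanged(path, newPath, unit string) error {
		old, _ := os.ReadFile(path) // a missing file reads as empty: still replaced
		candidate, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		if bytes.Equal(old, candidate) {
			return os.Remove(newPath) // nothing changed; drop the staged copy
		}
		if err := os.Rename(newPath, path); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(replaceIfChanged(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new",
			"docker"))
	}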
	I0207 19:56:49.218172  319486 start.go:267] post-start starting for "enable-default-cni-20220207194241-6868" (driver="docker")
	I0207 19:56:49.218184  319486 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0207 19:56:49.218257  319486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0207 19:56:49.218308  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:49.265104  319486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49429 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/enable-default-cni-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:49.364700  319486 ssh_runner.go:195] Run: cat /etc/os-release
	I0207 19:56:49.367990  319486 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0207 19:56:49.368023  319486 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0207 19:56:49.368036  319486 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0207 19:56:49.368043  319486 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0207 19:56:49.368057  319486 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/addons for local assets ...
	I0207 19:56:49.368124  319486 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files for local assets ...
	I0207 19:56:49.368210  319486 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem -> 68682.pem in /etc/ssl/certs
	I0207 19:56:49.368312  319486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0207 19:56:49.376309  319486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem --> /etc/ssl/certs/68682.pem (1708 bytes)
	I0207 19:56:49.400818  319486 start.go:270] post-start completed in 182.626213ms
	I0207 19:56:49.401246  319486 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220207194241-6868
	I0207 19:56:49.441338  319486 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/config.json ...
	I0207 19:56:49.441640  319486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:56:49.441686  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:49.488734  319486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49429 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/enable-default-cni-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:49.576465  319486 start.go:129] duration metric: createHost completed in 11.47264733s
	I0207 19:56:49.576494  319486 start.go:80] releasing machines lock for "enable-default-cni-20220207194241-6868", held for 11.472826717s
	I0207 19:56:49.576605  319486 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-20220207194241-6868
	I0207 19:56:49.624101  319486 ssh_runner.go:195] Run: systemctl --version
	I0207 19:56:49.624147  319486 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0207 19:56:49.624168  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:49.624206  319486 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20220207194241-6868
	I0207 19:56:49.665923  319486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49429 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/enable-default-cni-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:49.667617  319486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49429 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/enable-default-cni-20220207194241-6868/id_rsa Username:docker}
	I0207 19:56:49.755569  319486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0207 19:56:49.783563  319486 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:56:49.794027  319486 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0207 19:56:49.794091  319486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0207 19:56:49.805302  319486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0207 19:56:49.820138  319486 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0207 19:56:49.925652  319486 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0207 19:56:50.009980  319486 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:56:50.020490  319486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0207 19:56:50.146393  319486 ssh_runner.go:195] Run: sudo systemctl start docker
	I0207 19:56:50.159652  319486 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:56:50.217225  319486 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:56:50.277719  319486 out.go:203] * Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
	I0207 19:56:50.277819  319486 cli_runner.go:133] Run: docker network inspect enable-default-cni-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:56:50.323443  319486 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0207 19:56:50.327217  319486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
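That last ssh_runner command pins host.minikube.internal to the network gateway by filtering any stale entry out of /etc/hosts and appending a fresh one. A simplified Go rendering of the filter-then-append step, assuming direct write access rather than the tmp-file-plus-sudo-cp dance used in the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostEntry rewrites an /etc/hosts-style file so that exactly one
	// line maps host to ip.
	func ensureHostEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) { // drop any stale mapping
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		fmt.Println(ensureHostEntry("/etc/hosts", "192.168.58.1", "host.minikube.internal"))
	}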
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-02-07 19:54:29 UTC, end at Mon 2022-02-07 19:56:51 UTC. --
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.883815565Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.919791127Z" level=info msg="Loading containers: done."
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.932060851Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.932147988Z" level=info msg="Daemon has completed initialization"
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 systemd[1]: Started Docker Application Container Engine.
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.951551504Z" level=info msg="API listen on [::]:2376"
	Feb 07 19:54:31 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:54:31.955652311Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 07 19:55:15 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:15.462862785Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:15 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:15.462918362Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:15 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:15.465143198Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:16 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:16.699777424Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Feb 07 19:55:16 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:16.945032971Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Feb 07 19:55:21 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:21.129775798Z" level=info msg="ignoring event" container=869dd1f73299bd0b45c6e89adc58d79abb29a321277df645d66d2e184062c43d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:55:22 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:22.044596540Z" level=info msg="ignoring event" container=a164e1d4889c88d6dcdbc10a152566832564e1a050f3176e48b4d4569fcf3b31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:55:30 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:30.231442199Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:30 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:30.231517839Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:30 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:30.401394519Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:37 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:37.249455874Z" level=info msg="ignoring event" container=723a3448d5f37a421ceeb38e8ec7d159b03b3029bc779f5c349887a001f97292 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:53.074290514Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:53.074394711Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:55:53.076385609Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:56:04 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:56:04.331182933Z" level=info msg="ignoring event" container=2f193b948eab195589a13ce368f65044625c7c774630e0b95bc29ce5b042e91f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:56:46.079569927Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:56:46.079625765Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 dockerd[458]: time="2022-02-07T19:56:46.081883337Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	2f193b948eab1       a90209bb39e3d       51 seconds ago       Exited              dashboard-metrics-scraper   3                   88de8803491b2
	3326546c62eb1       e1482a24335a6       About a minute ago   Running             kubernetes-dashboard        0                   d399a17c6ac99
	886d7532e9019       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   d3d718b05d1b9
	3cad7072d4135       bf261d1579144       About a minute ago   Running             coredns                     0                   54c673a142aee
	e6652b92a15b3       c21b0c7400f98       About a minute ago   Running             kube-proxy                  0                   87783008c830b
	c08b0cd64d6db       301ddc62b80b1       2 minutes ago        Running             kube-scheduler              0                   0e1fc7447b1db
	633fbb04fe35e       b2756210eeabf       2 minutes ago        Running             etcd                        0                   3f1ac95625308
	7e5c060473dfc       06a629a7e51cd       2 minutes ago        Running             kube-controller-manager     0                   ca9d427ad948c
	118b31137a1fe       b305571ca60a5       2 minutes ago        Running             kube-apiserver              0                   8174bcd3e8f59
	
	* 
	* ==> coredns [3cad7072d413] <==
	* .:53
	2022-02-07T19:55:14.964Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2022-02-07T19:55:14.964Z [INFO] CoreDNS-1.6.2
	2022-02-07T19:55:14.964Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2022-02-07T19:55:50.575Z [INFO] plugin/reload: Running configuration MD5 = bca3ea372abcb69ab498841fb7d6d24e
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220207194436-6868
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220207194436-6868
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb
	                    minikube.k8s.io/name=old-k8s-version-20220207194436-6868
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_02_07T19_54_57_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Feb 2022 19:54:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Feb 2022 19:56:46 +0000   Mon, 07 Feb 2022 19:54:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Feb 2022 19:56:46 +0000   Mon, 07 Feb 2022 19:54:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Feb 2022 19:56:46 +0000   Mon, 07 Feb 2022 19:54:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Feb 2022 19:56:46 +0000   Mon, 07 Feb 2022 19:54:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220207194436-6868
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32874652Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32874652Ki
	 pods:               110
	System Info:
	 Machine ID:                 f0d9fc3b84d34ab4ba684459888f0938
	 System UUID:                083453d1-7edc-492c-81f2-6224d6bd0799
	 Boot ID:                    1510a09b-8b2d-457e-ae3f-04f8be50f6e3
	 Kernel Version:             5.11.0-1029-gcp
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-wghxc                                        100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     100s
	  kube-system                etcd-old-k8s-version-20220207194436-6868                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                kube-apiserver-old-k8s-version-20220207194436-6868              250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                kube-controller-manager-old-k8s-version-20220207194436-6868    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                kube-proxy-vxvst                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                kube-scheduler-old-k8s-version-20220207194436-6868              100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                metrics-server-5b7b789f-l4rz7                                   100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         97s
	  kube-system                storage-provisioner                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kubernetes-dashboard       dashboard-metrics-scraper-6b84985989-9lc6t                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kubernetes-dashboard       kubernetes-dashboard-766959b846-zlhv4                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet, old-k8s-version-20220207194436-6868     Node old-k8s-version-20220207194436-6868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet, old-k8s-version-20220207194436-6868     Node old-k8s-version-20220207194436-6868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet, old-k8s-version-20220207194436-6868     Node old-k8s-version-20220207194436-6868 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                    kube-proxy, old-k8s-version-20220207194436-6868  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff fa 99 c8 1f b0 26 08 06
	[  +0.215273] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	[  +0.000024] ll header: 00000000: ff ff ff ff ff ff 7e 65 29 2a 5f e9 08 06
	[  +0.165223] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e 1f e9 75 c1 44 08 06
	[  +3.522174] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be b3 aa 34 8f 8a 08 06
	[  +0.453228] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 a3 ae a8 0a ae 08 06
	[  +1.670853] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a f4 83 b2 6d 79 08 06
	[  +1.817833] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 17 75 26 96 41 08 06
	[  +0.967120] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 cb e9 73 9b 24 08 06
	[ +19.232672] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 ce 8b d9 25 4a 08 06
	[  +0.000011] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e2 ce 8b d9 25 4a 08 06
	[  +0.939128] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000024] ll header: 00000000: ff ff ff ff ff ff 52 41 c9 46 b4 15 08 06
	[  +8.463247] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 2b ce bb c7 90 08 06
	[  +3.154543] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e 8c 0a 42 3f be 08 06
	
	* 
	* ==> etcd [633fbb04fe35] <==
	* 2022-02-07 19:54:55.314724 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-20220207194436-6868.16d1991ac3602739\" " with result "range_response_count:1 size:535" took too long (295.592843ms) to execute
	2022-02-07 19:54:55.314787 W | etcdserver: request "header:<ID:15638326212616165164 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:coredns\" value_size:206 >> failure:<>>" with result "size:16" took too long (239.487938ms) to execute
	2022-02-07 19:54:55.314887 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" " with result "range_response_count:0 size:5" took too long (293.342153ms) to execute
	2022-02-07 19:54:55.628400 W | etcdserver: request "header:<ID:15638326212616165168 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:coredns\" value_size:241 >> failure:<>>" with result "size:16" took too long (204.569204ms) to execute
	2022-02-07 19:54:55.628518 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (309.909407ms) to execute
	2022-02-07 19:54:55.955467 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/coredns\" " with result "range_response_count:1 size:181" took too long (228.667113ms) to execute
	2022-02-07 19:54:55.955507 W | etcdserver: request "header:<ID:15638326212616165177 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-20220207194436-6868.16d1991afe90631b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-20220207194436-6868.16d1991afe90631b\" value_size:442 lease:6414954175761389303 >> failure:<>>" with result "size:16" took too long (218.600182ms) to execute
	2022-02-07 19:54:56.188158 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (225.552258ms) to execute
	2022-02-07 19:54:56.222152 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:117" took too long (255.119529ms) to execute
	2022-02-07 19:54:56.605440 W | etcdserver: request "header:<ID:15638326212616165191 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/ranges/serviceips\" mod_revision:148 > success:<request_put:<key:\"/registry/ranges/serviceips\" value_size:71 >> failure:<request_range:<key:\"/registry/ranges/serviceips\" > >>" with result "size:16" took too long (291.024777ms) to execute
	2022-02-07 19:54:56.605663 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" " with result "range_response_count:1 size:236" took too long (380.347763ms) to execute
	2022-02-07 19:54:56.845451 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/kube-proxy\" " with result "range_response_count:1 size:187" took too long (174.364595ms) to execute
	2022-02-07 19:54:56.932885 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" " with result "range_response_count:0 size:5" took too long (258.090696ms) to execute
	2022-02-07 19:54:57.628805 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/statefulset-controller\" " with result "range_response_count:1 size:212" took too long (125.601489ms) to execute
	2022-02-07 19:54:57.887340 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (173.479785ms) to execute
	2022-02-07 19:55:30.204955 W | etcdserver: request "header:<ID:15638326212616165855 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-5b7b789f-l4rz7\" mod_revision:469 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-5b7b789f-l4rz7\" value_size:1769 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-5b7b789f-l4rz7\" > >>" with result "size:16" took too long (108.93138ms) to execute
	2022-02-07 19:55:30.594424 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-5b7b789f-l4rz7.16d19923356b2380\" " with result "range_response_count:1 size:501" took too long (189.478871ms) to execute
	2022-02-07 19:55:59.244667 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-old-k8s-version-20220207194436-6868\" " with result "range_response_count:1 size:1926" took too long (190.778076ms) to execute
	2022-02-07 19:55:59.542546 W | etcdserver: request "header:<ID:15638326212616165995 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-old-k8s-version-20220207194436-6868\" mod_revision:551 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-old-k8s-version-20220207194436-6868\" value_size:2273 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-old-k8s-version-20220207194436-6868\" > >>" with result "size:16" took too long (194.761455ms) to execute
	2022-02-07 19:55:59.542762 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (199.743782ms) to execute
	2022-02-07 19:55:59.939198 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses\" range_end:\"/registry/runtimeclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (150.233121ms) to execute
	2022-02-07 19:55:59.939357 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (278.995783ms) to execute
	2022-02-07 19:56:41.847511 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (121.706812ms) to execute
	2022-02-07 19:56:41.847656 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:4055" took too long (102.749872ms) to execute
	2022-02-07 19:56:41.847751 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (121.508166ms) to execute
	
	* 
	* ==> kernel <==
	*  19:56:52 up  1:39,  0 users,  load average: 8.80, 5.59, 3.61
	Linux old-k8s-version-20220207194436-6868 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [118b31137a1f] <==
	* I0207 19:54:53.996248       1 trace.go:116] Trace[298250682]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2022-02-07 19:54:53.477308684 +0000 UTC m=+13.108100383) (total time: 518.897725ms):
	Trace[298250682]: [518.842783ms] [429.037573ms] Transaction committed
	I0207 19:54:53.996561       1 trace.go:116] Trace[981877381]: "Patch" url:/api/v1/namespaces/default/events/old-k8s-version-20220207194436-6868.16d1991ac3602739 (started: 2022-02-07 19:54:53.477217291 +0000 UTC m=+13.108009001) (total time: 519.301969ms):
	Trace[981877381]: [89.491101ms] [89.453296ms] About to apply patch
	Trace[981877381]: [519.208192ms] [429.550827ms] Object stored in database
	I0207 19:54:55.739567       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0207 19:54:56.606448       1 trace.go:116] Trace[1673789357]: "GuaranteedUpdate etcd3" type:*core.RangeAllocation (started: 2022-02-07 19:54:55.966683762 +0000 UTC m=+15.597475442) (total time: 639.722259ms):
	Trace[1673789357]: [257.654995ms] [257.654995ms] initial value restored
	Trace[1673789357]: [639.701033ms] [382.024779ms] Transaction committed
	I0207 19:54:56.608458       1 trace.go:116] Trace[291133068]: "Create" url:/api/v1/namespaces/kube-system/services (started: 2022-02-07 19:54:55.965578053 +0000 UTC m=+15.596369753) (total time: 642.85178ms):
	Trace[291133068]: [642.501827ms] [641.51465ms] Object stored in database
	I0207 19:54:56.861130       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0207 19:55:11.087151       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0207 19:55:11.108057       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0207 19:55:11.605839       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0207 19:55:16.099497       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0207 19:55:16.099598       1 handler_proxy.go:99] no RequestInfo found in the context
	E0207 19:55:16.099692       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0207 19:55:16.099711       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0207 19:56:16.100021       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0207 19:56:16.100130       1 handler_proxy.go:99] no RequestInfo found in the context
	E0207 19:56:16.100187       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0207 19:56:16.100204       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7e5c060473df] <==
	* I0207 19:55:14.260341       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"403", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.264918       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.265284       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.289196       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.297870       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.298229       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.352675       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.353041       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.353064       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.353089       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.370000       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.370510       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.370709       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.370790       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.375861       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0207 19:55:14.375965       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0207 19:55:14.761954       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-5b7b789f", UID:"c06832a5-1156-4b67-94c0-e4b01dfc2a71", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-5b7b789f-l4rz7
	I0207 19:55:15.472656       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"d5a31d76-27f9-4c0c-9ed4-a497c02385e9", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-766959b846-zlhv4
	I0207 19:55:15.472718       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"52893cf8-18a2-45ec-8652-4fec0584cbaf", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-9lc6t
	E0207 19:55:41.862230       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 19:55:43.612052       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 19:56:12.115509       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 19:56:15.613878       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0207 19:56:42.367580       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0207 19:56:47.615906       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e6652b92a15b] <==
	* W0207 19:55:13.745693       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0207 19:55:13.785265       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0207 19:55:13.785319       1 server_others.go:149] Using iptables Proxier.
	I0207 19:55:13.795019       1 server.go:529] Version: v1.16.0
	I0207 19:55:13.796679       1 config.go:313] Starting service config controller
	I0207 19:55:13.806688       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0207 19:55:13.804720       1 config.go:131] Starting endpoints config controller
	I0207 19:55:13.807328       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0207 19:55:13.937788       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0207 19:55:13.937869       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [c08b0cd64d6d] <==
	* E0207 19:54:48.570533       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 19:54:48.571666       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:48.572721       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0207 19:54:49.557144       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0207 19:54:49.565004       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0207 19:54:49.565618       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:49.566722       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0207 19:54:49.567745       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0207 19:54:49.568790       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0207 19:54:49.569788       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0207 19:54:49.570958       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0207 19:54:49.572058       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 19:54:49.573213       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:49.574453       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0207 19:54:50.558557       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0207 19:54:50.566449       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0207 19:54:50.567248       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:50.568285       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0207 19:54:50.569241       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0207 19:54:50.570447       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0207 19:54:50.571305       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0207 19:54:50.572558       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0207 19:54:50.573630       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0207 19:54:50.574795       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0207 19:54:50.575777       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-02-07 19:54:29 UTC, end at Mon 2022-02-07 19:56:52 UTC. --
	Feb 07 19:55:30 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:30.402165    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:55:37 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:55:37.996040    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:55:38 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:38.001951    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:55:39 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:55:39.009073    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:55:41 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:41.048907    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 19:55:45 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:45.393821    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:53.076916    1272 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:53.076970    1272 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:53.077030    1272 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:55:53 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:55:53.077059    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 07 19:56:04 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:56:04.188791    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:56:05 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:05.047787    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 19:56:05 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:56:05.318841    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:56:05 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:05.326552    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:06 old-k8s-version-20220207194436-6868 kubelet[1272]: W0207 19:56:06.333677    1272 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-9lc6t through plugin: invalid network status for
	Feb 07 19:56:06 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:06.338559    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:17 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:17.047691    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 19:56:19 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:19.045967    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:31 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:31.046729    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Feb 07 19:56:32 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:32.045383    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:43 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:43.045362    1272 pod_workers.go:191] Error syncing pod f64f2170-dbf3-4719-b7d6-261186e5b287 ("dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-9lc6t_kubernetes-dashboard(f64f2170-dbf3-4719-b7d6-261186e5b287)"
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:46.082455    1272 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:46.082504    1272 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:46.082569    1272 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 07 19:56:46 old-k8s-version-20220207194436-6868 kubelet[1272]: E0207 19:56:46.082600    1272 pod_workers.go:191] Error syncing pod 38641e20-0ebe-4eaa-ba94-68e627b0e83b ("metrics-server-5b7b789f-l4rz7_kube-system(38641e20-0ebe-4eaa-ba94-68e627b0e83b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	
	* 
	* ==> kubernetes-dashboard [3326546c62eb] <==
	* 2022/02/07 19:55:16 Starting overwatch
	2022/02/07 19:55:16 Using namespace: kubernetes-dashboard
	2022/02/07 19:55:16 Using in-cluster config to connect to apiserver
	2022/02/07 19:55:16 Using secret token for csrf signing
	2022/02/07 19:55:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/02/07 19:55:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/02/07 19:55:16 Successful initial request to the apiserver, version: v1.16.0
	2022/02/07 19:55:16 Generating JWE encryption key
	2022/02/07 19:55:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/02/07 19:55:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/02/07 19:55:16 Initializing JWE encryption key from synchronized object
	2022/02/07 19:55:16 Creating in-cluster Sidecar client
	2022/02/07 19:55:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 19:55:16 Serving insecurely on HTTP port: 9090
	2022/02/07 19:55:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 19:56:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/02/07 19:56:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [886d7532e901] <==
	* I0207 19:55:14.864090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0207 19:55:14.942229       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0207 19:55:14.942301       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0207 19:55:14.970919       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0207 19:55:14.971067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a398cac-aeae-4661-b698-55cee3afd866", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20220207194436-6868_7308e4d0-7f2d-42b1-81bc-5de07deb4fbb became leader
	I0207 19:55:14.971320       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220207194436-6868_7308e4d0-7f2d-42b1-81bc-5de07deb4fbb!
	I0207 19:55:15.073456       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220207194436-6868_7308e4d0-7f2d-42b1-81bc-5de07deb4fbb!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20220207194436-6868 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-5b7b789f-l4rz7
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220207194436-6868 describe pod metrics-server-5b7b789f-l4rz7
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220207194436-6868 describe pod metrics-server-5b7b789f-l4rz7: exit status 1 (89.752214ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5b7b789f-l4rz7" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220207194436-6868 describe pod metrics-server-5b7b789f-l4rz7: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.29s)
E0207 20:04:15.568039    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:04:16.238282    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (279.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
E0207 19:57:19.286868    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kindnet-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: exit status 80 (4m39.830563122s)

                                                
                                                
-- stdout --
	* [kindnet-20220207194241-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node kindnet-20220207194241-6868 in cluster kindnet-20220207194241-6868
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0207 19:56:56.895567  329832 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:56:56.895691  329832 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:56:56.895697  329832 out.go:310] Setting ErrFile to fd 2...
	I0207 19:56:56.895703  329832 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:56:56.895834  329832 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:56:56.896221  329832 out.go:304] Setting JSON to false
	I0207 19:56:56.898758  329832 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5973,"bootTime":1644257844,"procs":911,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:56:56.898866  329832 start.go:122] virtualization: kvm guest
	I0207 19:56:56.902083  329832 out.go:176] * [kindnet-20220207194241-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0207 19:56:56.903688  329832 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:56:56.905276  329832 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:56:56.902306  329832 notify.go:174] Checking for updates...
	I0207 19:56:56.907056  329832 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:56:56.908656  329832 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	I0207 19:56:56.910256  329832 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0207 19:56:56.910875  329832 config.go:176] Loaded profile config "calico-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:56.910978  329832 config.go:176] Loaded profile config "custom-weave-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:56.911052  329832 config.go:176] Loaded profile config "enable-default-cni-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:56:56.911108  329832 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:56:56.980665  329832 docker.go:132] docker version: linux-20.10.12
	I0207 19:56:56.980785  329832 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:56:57.116005  329832 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:56:57.02353984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:56:57.116193  329832 docker.go:237] overlay module found
	I0207 19:56:57.118923  329832 out.go:176] * Using the docker driver based on user configuration
	I0207 19:56:57.118950  329832 start.go:281] selected driver: docker
	I0207 19:56:57.118956  329832 start.go:798] validating driver "docker" against <nil>
	I0207 19:56:57.118975  329832 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0207 19:56:57.119015  329832 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0207 19:56:57.119034  329832 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0207 19:56:57.120654  329832 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0207 19:56:57.121282  329832 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:56:57.232415  329832 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-02-07 19:56:57.158414148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:56:57.232581  329832 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 19:56:57.232761  329832 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 19:56:57.232792  329832 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0207 19:56:57.232811  329832 cni.go:93] Creating CNI manager for "kindnet"
	I0207 19:56:57.232821  329832 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0207 19:56:57.232825  329832 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0207 19:56:57.232829  329832 start_flags.go:297] Found "CNI" CNI - setting NetworkPlugin=cni
	I0207 19:56:57.232847  329832 start_flags.go:302] config:
	{Name:kindnet-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:56:57.235612  329832 out.go:176] * Starting control plane node kindnet-20220207194241-6868 in cluster kindnet-20220207194241-6868
	I0207 19:56:57.235666  329832 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:56:57.237599  329832 out.go:176] * Pulling base image ...
	I0207 19:56:57.237639  329832 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:56:57.237678  329832 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 19:56:57.237690  329832 cache.go:57] Caching tarball of preloaded images
	I0207 19:56:57.237742  329832 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:56:57.237983  329832 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0207 19:56:57.237997  329832 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
	I0207 19:56:57.238152  329832 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/config.json ...
	I0207 19:56:57.238176  329832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/config.json: {Name:mkd762256af656b94efb4e88a51a6af5d5b544d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:56:57.316465  329832 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 19:56:57.316505  329832 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 19:56:57.316527  329832 cache.go:208] Successfully downloaded all kic artifacts
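
Note: the two image checks above (19:56:57.237742 and 19:56:57.316465) short-circuit the base-image pull when the kicbase image is already present in the local daemon. A minimal sketch of such a presence check using only the docker CLI; the function name is illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInDaemon reports whether the local docker daemon already holds the
    // image; a non-zero exit from `docker image inspect` is read as "absent",
    // mirroring the found/skipping-pull branch in the log above.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302"
        if imageInDaemon(ref) {
            fmt.Println("exists in daemon, skipping load")
        } else {
            fmt.Println("not found, would pull")
        }
    }
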
	I0207 19:56:57.316579  329832 start.go:313] acquiring machines lock for kindnet-20220207194241-6868: {Name:mk57eeafb627d923429c4c41e4c95f4a9d09fa19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0207 19:56:57.316782  329832 start.go:317] acquired machines lock for "kindnet-20220207194241-6868" in 173.17µs
	I0207 19:56:57.316828  329832 start.go:89] Provisioning new machine with config: &{Name:kindnet-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
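
Note: the WriteFile lock at 19:56:57.238176 and the machines lock above both carry Clock/Delay/Timeout fields (Delay:500ms, Timeout:10m0s), which suggests an acquire-with-retry loop. A sketch of that pattern under those assumptions, using an O_EXCL lock file; an illustration of the Delay/Timeout semantics, not minikube's actual lock implementation:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries creating an exclusive lock file every `delay` until
    // `timeout` elapses, the behaviour the Delay/Timeout fields imply.
    func acquire(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f.Close() // lock held; the caller removes path to release
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        defer os.Remove("/tmp/machines.lock")
        fmt.Println("acquired machines lock")
    }
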
	I0207 19:56:57.316958  329832 start.go:126] createHost starting for "" (driver="docker")
	I0207 19:56:57.319913  329832 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0207 19:56:57.320243  329832 start.go:160] libmachine.API.Create for "kindnet-20220207194241-6868" (driver="docker")
	I0207 19:56:57.320287  329832 client.go:168] LocalClient.Create starting
	I0207 19:56:57.320438  329832 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem
	I0207 19:56:57.320526  329832 main.go:130] libmachine: Decoding PEM data...
	I0207 19:56:57.320552  329832 main.go:130] libmachine: Parsing certificate...
	I0207 19:56:57.320609  329832 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem
	I0207 19:56:57.320633  329832 main.go:130] libmachine: Decoding PEM data...
	I0207 19:56:57.320656  329832 main.go:130] libmachine: Parsing certificate...
	I0207 19:56:57.321132  329832 cli_runner.go:133] Run: docker network inspect kindnet-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0207 19:56:57.373272  329832 cli_runner.go:180] docker network inspect kindnet-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0207 19:56:57.373347  329832 network_create.go:254] running [docker network inspect kindnet-20220207194241-6868] to gather additional debugging logs...
	I0207 19:56:57.373378  329832 cli_runner.go:133] Run: docker network inspect kindnet-20220207194241-6868
	W0207 19:56:57.419374  329832 cli_runner.go:180] docker network inspect kindnet-20220207194241-6868 returned with exit code 1
	I0207 19:56:57.419411  329832 network_create.go:257] error running [docker network inspect kindnet-20220207194241-6868]: docker network inspect kindnet-20220207194241-6868: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220207194241-6868
	I0207 19:56:57.419430  329832 network_create.go:259] output of [docker network inspect kindnet-20220207194241-6868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220207194241-6868
	
	** /stderr **
	I0207 19:56:57.419489  329832 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:56:57.469166  329832 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-64d1bcee4c72 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:13:39:0e:a7}}
	I0207 19:56:57.470262  329832 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-3a091fe05b9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:67:02:cf:25}}
	I0207 19:56:57.471054  329832 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-ecb8700540dc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:98:c3:a0:c5}}
	I0207 19:56:57.471994  329832 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000132118] misses:0}
	I0207 19:56:57.472037  329832 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0207 19:56:57.472053  329832 network_create.go:106] attempt to create docker network kindnet-20220207194241-6868 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0207 19:56:57.472114  329832 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220207194241-6868
	I0207 19:56:57.573842  329832 network_create.go:90] docker network kindnet-20220207194241-6868 192.168.76.0/24 created
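
Note: the three "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.76.0/24" show a linear scan over candidate /24 networks, with the third octet stepping by 9 (49, 58, 67, 76). A sketch of that scan; here the taken set is hard-coded from the bridges observed above, whereas the real check inspects existing interfaces:

    package main

    import "fmt"

    // firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... stepping the
    // third octet by 9, and returns the first candidate not already taken,
    // matching the skip/skip/skip/reserve sequence in the log.
    func firstFreeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{ // the bridge networks observed above
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
    }
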
	I0207 19:56:57.573889  329832 kic.go:106] calculated static IP "192.168.76.2" for the "kindnet-20220207194241-6868" container
	I0207 19:56:57.573973  329832 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0207 19:56:57.620087  329832 cli_runner.go:133] Run: docker volume create kindnet-20220207194241-6868 --label name.minikube.sigs.k8s.io=kindnet-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true
	I0207 19:56:57.667548  329832 oci.go:102] Successfully created a docker volume kindnet-20220207194241-6868
	I0207 19:56:57.667652  329832 cli_runner.go:133] Run: docker run --rm --name kindnet-20220207194241-6868-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220207194241-6868 --entrypoint /usr/bin/test -v kindnet-20220207194241-6868:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -d /var/lib
	I0207 19:56:58.345686  329832 oci.go:106] Successfully prepared a docker volume kindnet-20220207194241-6868
	I0207 19:56:58.345738  329832 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:56:58.345776  329832 kic.go:179] Starting extracting preloaded images to volume ...
	I0207 19:56:58.345843  329832 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0207 19:57:04.562924  329832 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220207194241-6868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.217035646s)
	I0207 19:57:04.562961  329832 kic.go:188] duration metric: took 6.217182 seconds to extract preloaded images to volume
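
Note: the 6.2s step above unpacks the preloaded image tarball into the cluster's docker volume by running tar inside the kicbase image, with the tarball bind-mounted read-only. The same invocation issued from Go via os/exec, with arguments taken from the log; the sha256 digest is elided and the tarball path is a placeholder:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        tarball := "/path/to/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4"
        volume := "kindnet-20220207194241-6868"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302"

        // docker run --rm --entrypoint /usr/bin/tar \
        //   -v <tarball>:/preloaded.tar:ro -v <volume>:/extractDir <image> \
        //   -I lz4 -xf /preloaded.tar -C /extractDir
        cmd := exec.Command("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
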
	W0207 19:57:04.563013  329832 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0207 19:57:04.563028  329832 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0207 19:57:04.563086  329832 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0207 19:57:04.677428  329832 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220207194241-6868 --name kindnet-20220207194241-6868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220207194241-6868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220207194241-6868 --network kindnet-20220207194241-6868 --ip 192.168.76.2 --volume kindnet-20220207194241-6868:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8
	I0207 19:57:05.230611  329832 cli_runner.go:133] Run: docker container inspect kindnet-20220207194241-6868 --format={{.State.Running}}
	I0207 19:57:05.279386  329832 cli_runner.go:133] Run: docker container inspect kindnet-20220207194241-6868 --format={{.State.Status}}
	I0207 19:57:05.324173  329832 cli_runner.go:133] Run: docker exec kindnet-20220207194241-6868 stat /var/lib/dpkg/alternatives/iptables
	I0207 19:57:05.433591  329832 oci.go:281] the created container "kindnet-20220207194241-6868" has a running status.
	I0207 19:57:05.433630  329832 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/kindnet-20220207194241-6868/id_rsa...
	I0207 19:57:05.542847  329832 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/kindnet-20220207194241-6868/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0207 19:57:05.662667  329832 cli_runner.go:133] Run: docker container inspect kindnet-20220207194241-6868 --format={{.State.Status}}
	I0207 19:57:05.705864  329832 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0207 19:57:05.705897  329832 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220207194241-6868 chown docker:docker /home/docker/.ssh/authorized_keys]
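
Note: creating the kic ssh key (19:57:05.433630) and installing its public half as /home/docker/.ssh/authorized_keys is ordinary keypair provisioning. A sketch using golang.org/x/crypto/ssh for the authorized_keys encoding, assuming RSA as the id_rsa naming implies; output file names are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        // Private half -> id_rsa, PEM-encoded PKCS#1.
        priv := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", priv, 0o600); err != nil {
            log.Fatal(err)
        }
        // Public half -> id_rsa.pub, in the authorized_keys format that the
        // log copies to /home/docker/.ssh/authorized_keys.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            log.Fatal(err)
        }
    }
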
	I0207 19:57:05.811564  329832 cli_runner.go:133] Run: docker container inspect kindnet-20220207194241-6868 --format={{.State.Status}}
	I0207 19:57:05.854506  329832 machine.go:88] provisioning docker machine ...
	I0207 19:57:05.854546  329832 ubuntu.go:169] provisioning hostname "kindnet-20220207194241-6868"
	I0207 19:57:05.854614  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:05.906747  329832 main.go:130] libmachine: Using SSH client type: native
	I0207 19:57:05.906986  329832 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49434 <nil> <nil>}
	I0207 19:57:05.907012  329832 main.go:130] libmachine: About to run SSH command:
	sudo hostname kindnet-20220207194241-6868 && echo "kindnet-20220207194241-6868" | sudo tee /etc/hostname
	I0207 19:57:06.085213  329832 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20220207194241-6868
	
	I0207 19:57:06.085294  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:06.126440  329832 main.go:130] libmachine: Using SSH client type: native
	I0207 19:57:06.126683  329832 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49434 <nil> <nil>}
	I0207 19:57:06.126706  329832 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220207194241-6868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220207194241-6868/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220207194241-6868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0207 19:57:06.259591  329832 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0207 19:57:06.259625  329832 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube}
	I0207 19:57:06.259662  329832 ubuntu.go:177] setting up certificates
	I0207 19:57:06.259675  329832 provision.go:83] configureAuth start
	I0207 19:57:06.259731  329832 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220207194241-6868
	I0207 19:57:06.305247  329832 provision.go:138] copyHostCerts
	I0207 19:57:06.305313  329832 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem, removing ...
	I0207 19:57:06.305323  329832 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem
	I0207 19:57:06.305408  329832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/key.pem (1675 bytes)
	I0207 19:57:06.305515  329832 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem, removing ...
	I0207 19:57:06.305527  329832 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem
	I0207 19:57:06.305563  329832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.pem (1078 bytes)
	I0207 19:57:06.305639  329832 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem, removing ...
	I0207 19:57:06.305656  329832 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem
	I0207 19:57:06.305689  329832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cert.pem (1123 bytes)
	I0207 19:57:06.305746  329832 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220207194241-6868 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220207194241-6868]
	I0207 19:57:06.505357  329832 provision.go:172] copyRemoteCerts
	I0207 19:57:06.505424  329832 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0207 19:57:06.505458  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:06.545511  329832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/kindnet-20220207194241-6868/id_rsa Username:docker}
	I0207 19:57:06.644664  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0207 19:57:06.676035  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0207 19:57:06.710249  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0207 19:57:06.732330  329832 provision.go:86] duration metric: configureAuth took 472.639008ms
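
Note: the server cert generated at 19:57:06.305746 carries the SAN list [192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220207194241-6868]. A sketch of issuing a CA-signed server certificate with that SAN set via crypto/x509; the CA here is self-generated for brevity, unlike the on-disk minikube CA that the log reuses:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        // Stand-in CA, self-generated here; the log instead reuses the
        // existing minikubeCA key pair on disk.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate carrying the SAN list from the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "kindnet-20220207194241-6868"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "kindnet-20220207194241-6868"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
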
	I0207 19:57:06.732398  329832 ubuntu.go:193] setting minikube options for container-runtime
	I0207 19:57:06.732574  329832 config.go:176] Loaded profile config "kindnet-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:57:06.732620  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:06.784790  329832 main.go:130] libmachine: Using SSH client type: native
	I0207 19:57:06.784990  329832 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49434 <nil> <nil>}
	I0207 19:57:06.785013  329832 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0207 19:57:06.923454  329832 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0207 19:57:06.923488  329832 ubuntu.go:71] root file system type: overlay
	I0207 19:57:06.923633  329832 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0207 19:57:06.923687  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:06.967489  329832 main.go:130] libmachine: Using SSH client type: native
	I0207 19:57:06.967681  329832 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49434 <nil> <nil>}
	I0207 19:57:06.967819  329832 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0207 19:57:07.113954  329832 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0207 19:57:07.114028  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:07.153797  329832 main.go:130] libmachine: Using SSH client type: native
	I0207 19:57:07.154008  329832 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49434 <nil> <nil>}
	I0207 19:57:07.154041  329832 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0207 19:57:07.911222  329832 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-02-07 19:57:07.109507236 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0207 19:57:07.911263  329832 machine.go:91] provisioned docker machine in 2.056731427s
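
Note: the SSH command at 19:57:07.154041 is an update-if-changed pattern: diff the rendered unit against the installed one, and only move it into place and restart docker when they differ, so an unchanged unit costs nothing. The same idea sketched locally; the path and unit body are placeholders, not the full unit the log renders:

    package main

    import (
        "bytes"
        "log"
        "os"
        "os/exec"
    )

    // installIfChanged mirrors the shell sequence `diff -u old new || { mv;
    // daemon-reload; restart; }`: rewrite the unit and bounce the service
    // only when the rendered content actually differs.
    func installIfChanged(path string, rendered []byte) error {
        current, _ := os.ReadFile(path) // a missing file reads as empty
        if bytes.Equal(current, rendered) {
            return nil // unchanged: no reload, no restart
        }
        if err := os.WriteFile(path, rendered, 0o644); err != nil {
            return err
        }
        if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
            return err
        }
        return exec.Command("systemctl", "restart", "docker").Run()
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := installIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
            log.Fatal(err)
        }
    }
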
	I0207 19:57:07.911274  329832 client.go:171] LocalClient.Create took 10.5909737s
	I0207 19:57:07.911287  329832 start.go:168] duration metric: libmachine.API.Create for "kindnet-20220207194241-6868" took 10.591045346s
	I0207 19:57:07.911295  329832 start.go:267] post-start starting for "kindnet-20220207194241-6868" (driver="docker")
	I0207 19:57:07.911301  329832 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0207 19:57:07.911367  329832 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0207 19:57:07.911404  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:07.952541  329832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/kindnet-20220207194241-6868/id_rsa Username:docker}
	I0207 19:57:08.057134  329832 ssh_runner.go:195] Run: cat /etc/os-release
	I0207 19:57:08.060796  329832 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0207 19:57:08.060818  329832 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0207 19:57:08.060826  329832 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0207 19:57:08.060831  329832 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0207 19:57:08.060840  329832 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/addons for local assets ...
	I0207 19:57:08.060892  329832 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files for local assets ...
	I0207 19:57:08.060979  329832 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem -> 68682.pem in /etc/ssl/certs
	I0207 19:57:08.061092  329832 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0207 19:57:08.070479  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem --> /etc/ssl/certs/68682.pem (1708 bytes)
	I0207 19:57:08.100436  329832 start.go:270] post-start completed in 189.125825ms
	I0207 19:57:08.100926  329832 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220207194241-6868
	I0207 19:57:08.142768  329832 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/config.json ...
	I0207 19:57:08.143058  329832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:57:08.143112  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:08.193097  329832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/kindnet-20220207194241-6868/id_rsa Username:docker}
	I0207 19:57:08.287989  329832 start.go:129] duration metric: createHost completed in 10.971015618s
	I0207 19:57:08.288020  329832 start.go:80] releasing machines lock for "kindnet-20220207194241-6868", held for 10.971217018s
	I0207 19:57:08.288115  329832 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220207194241-6868
	I0207 19:57:08.328335  329832 ssh_runner.go:195] Run: systemctl --version
	I0207 19:57:08.328390  329832 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0207 19:57:08.328398  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:08.328451  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:08.372476  329832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/kindnet-20220207194241-6868/id_rsa Username:docker}
	I0207 19:57:08.387028  329832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/kindnet-20220207194241-6868/id_rsa Username:docker}
	I0207 19:57:08.498439  329832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0207 19:57:08.510584  329832 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:57:08.520816  329832 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0207 19:57:08.520902  329832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0207 19:57:08.530952  329832 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0207 19:57:08.546123  329832 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0207 19:57:08.648438  329832 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0207 19:57:08.731317  329832 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0207 19:57:08.741583  329832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0207 19:57:08.824113  329832 ssh_runner.go:195] Run: sudo systemctl start docker
	I0207 19:57:08.834556  329832 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:57:08.881767  329832 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0207 19:57:08.941808  329832 out.go:203] * Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
	I0207 19:57:08.941934  329832 cli_runner.go:133] Run: docker network inspect kindnet-20220207194241-6868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0207 19:57:08.988751  329832 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0207 19:57:08.993410  329832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
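
Note: the bash one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal line and appending the fresh mapping; the identical pattern appears again below for control-plane.minikube.internal. The filter-then-append step in Go, run here against a local copy rather than /etc/hosts:

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    // setHostsEntry drops any existing line for name and appends "ip\tname",
    // the same filter-then-append the bash one-liner performs.
    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := setHostsEntry("hosts.copy", "192.168.76.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
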
	I0207 19:57:09.008884  329832 out.go:176]   - kubelet.housekeeping-interval=5m
	I0207 19:57:09.010802  329832 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0207 19:57:09.010916  329832 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:57:09.010986  329832 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 19:57:09.060482  329832 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.3
	k8s.gcr.io/kube-proxy:v1.23.3
	k8s.gcr.io/kube-controller-manager:v1.23.3
	k8s.gcr.io/kube-scheduler:v1.23.3
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 19:57:09.060538  329832 docker.go:537] Images already preloaded, skipping extraction
	I0207 19:57:09.060668  329832 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0207 19:57:09.106820  329832 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.3
	k8s.gcr.io/kube-proxy:v1.23.3
	k8s.gcr.io/kube-scheduler:v1.23.3
	k8s.gcr.io/kube-controller-manager:v1.23.3
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0207 19:57:09.106853  329832 cache_images.go:84] Images are preloaded, skipping loading
	I0207 19:57:09.106909  329832 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0207 19:57:09.209787  329832 cni.go:93] Creating CNI manager for "kindnet"
	I0207 19:57:09.209817  329832 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0207 19:57:09.209836  329832 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220207194241-6868 NodeName:kindnet-20220207194241-6868 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0207 19:57:09.209987  329832 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20220207194241-6868"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
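
Note: the kubeadm config above is rendered from the option set logged at 19:57:09.209836. A toy rendering of its first fragment with text/template, to show the shape of the substitution; the struct fields here are illustrative stand-ins for minikube's actual kubeadm options type:

    package main

    import (
        "os"
        "text/template"
    )

    // A fragment of the InitConfiguration shown above, with the values that
    // vary per cluster pulled out as template fields.
    const frag = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: {{.CRISocket}}\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        opts := struct {
            AdvertiseAddress, CRISocket, NodeName string
            APIServerPort                         int
        }{"192.168.76.2", "/var/run/dockershim.sock", "kindnet-20220207194241-6868", 8443}
        if err := template.Must(template.New("kubeadm").Parse(frag)).Execute(os.Stdout, opts); err != nil {
            panic(err)
        }
    }
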
	
	I0207 19:57:09.210081  329832 kubeadm.go:935] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220207194241-6868 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0207 19:57:09.210140  329832 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
	I0207 19:57:09.217819  329832 binaries.go:44] Found k8s binaries, skipping transfer
	I0207 19:57:09.217903  329832 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0207 19:57:09.225201  329832 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0207 19:57:09.239853  329832 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0207 19:57:09.256955  329832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0207 19:57:09.275739  329832 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0207 19:57:09.279848  329832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0207 19:57:09.291182  329832 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868 for IP: 192.168.76.2
	I0207 19:57:09.291311  329832 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.key
	I0207 19:57:09.291364  329832 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/proxy-client-ca.key
	I0207 19:57:09.291427  329832 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/client.key
	I0207 19:57:09.291451  329832 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/client.crt with IP's: []
	I0207 19:57:09.352267  329832 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/client.crt ...
	I0207 19:57:09.352308  329832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/client.crt: {Name:mkfbf53f522782723c8d9e829bceb244c3c398cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:57:09.352545  329832 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/client.key ...
	I0207 19:57:09.352569  329832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/client.key: {Name:mk9ea33dc7a51f50151d3907322eec63202eaa7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:57:09.352714  329832 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.key.31bdca25
	I0207 19:57:09.352737  329832 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0207 19:57:09.512356  329832 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.crt.31bdca25 ...
	I0207 19:57:09.512398  329832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.crt.31bdca25: {Name:mk9ecba882ac5558fe08ac1a349bf86e3406dff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:57:09.512617  329832 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.key.31bdca25 ...
	I0207 19:57:09.512635  329832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.key.31bdca25: {Name:mk71167c1585956d8474cef8d789d5b8905ab089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:57:09.512746  329832 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.crt
	I0207 19:57:09.512822  329832 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.key
	I0207 19:57:09.512878  329832 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/proxy-client.key
	I0207 19:57:09.512894  329832 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/proxy-client.crt with IP's: []
	I0207 19:57:09.694963  329832 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/proxy-client.crt ...
	I0207 19:57:09.694996  329832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/proxy-client.crt: {Name:mk39bee48be73d5e93578032095745314c29779f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:57:09.695222  329832 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/proxy-client.key ...
	I0207 19:57:09.695240  329832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/proxy-client.key: {Name:mkf3185f5003e7e3de6fe5f2ac091749e61d8b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:57:09.695467  329832 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/6868.pem (1338 bytes)
	W0207 19:57:09.695512  329832 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/6868_empty.pem, impossibly tiny 0 bytes
	I0207 19:57:09.695530  329832 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca-key.pem (1675 bytes)
	I0207 19:57:09.695560  329832 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/ca.pem (1078 bytes)
	I0207 19:57:09.695594  329832 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/cert.pem (1123 bytes)
	I0207 19:57:09.695632  329832 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/key.pem (1675 bytes)
	I0207 19:57:09.695714  329832 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem (1708 bytes)
	I0207 19:57:09.696588  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0207 19:57:09.715965  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0207 19:57:09.734503  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0207 19:57:09.752876  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/kindnet-20220207194241-6868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0207 19:57:09.772419  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0207 19:57:09.791367  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0207 19:57:09.811410  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0207 19:57:09.886881  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0207 19:57:09.908213  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/certs/6868.pem --> /usr/share/ca-certificates/6868.pem (1338 bytes)
	I0207 19:57:09.932416  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/ssl/certs/68682.pem --> /usr/share/ca-certificates/68682.pem (1708 bytes)
	I0207 19:57:09.955466  329832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0207 19:57:09.981820  329832 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
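The scp calls above stage the generated PKI material under /var/lib/minikube/certs inside the node container. If a run like this needs to be inspected afterwards, the staged files can be listed from the host (a sketch; the profile name is the one from this log, and it assumes the container still exists):

	# List the certificates that were copied into the node (follow-up sketch, not part of the test run)
	minikube -p kindnet-20220207194241-6868 ssh -- sudo ls -la /var/lib/minikube/certs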
	I0207 19:57:09.998329  329832 ssh_runner.go:195] Run: openssl version
	I0207 19:57:10.004321  329832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68682.pem && ln -fs /usr/share/ca-certificates/68682.pem /etc/ssl/certs/68682.pem"
	I0207 19:57:10.014760  329832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68682.pem
	I0207 19:57:10.018434  329832 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb  7 19:21 /usr/share/ca-certificates/68682.pem
	I0207 19:57:10.018493  329832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68682.pem
	I0207 19:57:10.023660  329832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68682.pem /etc/ssl/certs/3ec20f2e.0"
	I0207 19:57:10.031751  329832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0207 19:57:10.039851  329832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:57:10.044016  329832 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb  7 19:17 /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:57:10.044086  329832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0207 19:57:10.051867  329832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0207 19:57:10.062208  329832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6868.pem && ln -fs /usr/share/ca-certificates/6868.pem /etc/ssl/certs/6868.pem"
	I0207 19:57:10.071327  329832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6868.pem
	I0207 19:57:10.076043  329832 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb  7 19:21 /usr/share/ca-certificates/6868.pem
	I0207 19:57:10.076107  329832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6868.pem
	I0207 19:57:10.082191  329832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6868.pem /etc/ssl/certs/51391683.0"
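The openssl/ln sequence above is the standard OpenSSL trust-store installation: each CA certificate's subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink is created in /etc/ssl/certs so that hash-based lookup finds it. A minimal standalone sketch of the same technique (my-ca.pem is a hypothetical example file):

	# Install a CA certificate into the OpenSSL trust store by subject hash,
	# mirroring what the log does for 6868.pem, 68682.pem and minikubeCA.pem.
	sudo cp my-ca.pem /usr/share/ca-certificates/my-ca.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
	sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${hash}.0"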
	I0207 19:57:10.093825  329832 kubeadm.go:390] StartCluster: {Name:kindnet-20220207194241-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220207194241-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:57:10.093989  329832 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0207 19:57:10.134407  329832 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0207 19:57:10.143433  329832 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0207 19:57:10.152561  329832 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0207 19:57:10.152622  329832 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0207 19:57:10.160848  329832 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0207 19:57:10.160891  329832 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0207 19:57:10.793517  329832 out.go:203]   - Generating certificates and keys ...
	I0207 19:57:13.860981  329832 out.go:203]   - Booting up control plane ...
	I0207 19:57:22.423647  329832 out.go:203]   - Configuring RBAC rules ...
	I0207 19:57:22.843290  329832 cni.go:93] Creating CNI manager for "kindnet"
	I0207 19:57:22.844978  329832 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0207 19:57:22.845042  329832 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0207 19:57:22.849626  329832 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
	I0207 19:57:22.849650  329832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0207 19:57:22.867890  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0207 19:57:24.204120  329832 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.336177969s)
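The kubectl apply above installs the kindnet CNI manifest; whether it actually came up is what the node-readiness wait below depends on. A follow-up check could look like this (a sketch; it assumes the manifest's usual kube-system DaemonSet named kindnet):

	# Confirm the kindnet DaemonSet rolled out (hypothetical follow-up)
	kubectl --context kindnet-20220207194241-6868 -n kube-system get pods -o wide
	kubectl --context kindnet-20220207194241-6868 -n kube-system rollout status daemonset/kindnet --timeout=60s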
	I0207 19:57:24.204182  329832 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0207 19:57:24.204313  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:24.204396  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=68b41900649d825bc98a620f335c8941b16741bb minikube.k8s.io/name=kindnet-20220207194241-6868 minikube.k8s.io/updated_at=2022_02_07T19_57_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:24.305379  329832 ops.go:34] apiserver oom_adj: -16
	I0207 19:57:24.305486  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:24.874499  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:25.374489  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:25.874631  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:26.374195  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:26.874267  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:27.374845  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:27.874035  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:28.374619  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:28.873912  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:29.373898  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:29.873850  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:30.374305  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:30.873841  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:31.374454  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:31.874495  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:32.373918  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:32.874610  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:33.374823  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:33.873949  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:34.374202  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:34.873902  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:35.374700  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:35.874296  329832 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0207 19:57:35.966395  329832 kubeadm.go:1019] duration metric: took 11.762129204s to wait for elevateKubeSystemPrivileges.
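The repeated `kubectl get sa default` calls between 19:57:24 and 19:57:35 are a readiness poll: the ServiceAccount controller creates the default ServiceAccount asynchronously after the API server comes up, and minikube retries until it exists (the 11.76s "wait for elevateKubeSystemPrivileges" logged above). An equivalent standalone loop, as a sketch using the paths from this log:

	# Wait until the controller has created the "default" ServiceAccount
	until sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done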
	I0207 19:57:35.966433  329832 kubeadm.go:392] StartCluster complete in 25.87261782s
	I0207 19:57:35.966462  329832 settings.go:142] acquiring lock: {Name:mk7529dd3428fdf27408cc6b278cb5c7b03413f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:57:35.966589  329832 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:57:35.969159  329832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig: {Name:mkd7bc53058a925fccbecd7920bc22204f3abc89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0207 19:57:36.497817  329832 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220207194241-6868" rescaled to 1
	I0207 19:57:36.497873  329832 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0207 19:57:36.500322  329832 out.go:176] * Verifying Kubernetes components...
	I0207 19:57:36.500378  329832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:57:36.497938  329832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0207 19:57:36.497952  329832 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0207 19:57:36.498134  329832 config.go:176] Loaded profile config "kindnet-20220207194241-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:57:36.500478  329832 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220207194241-6868"
	I0207 19:57:36.500498  329832 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220207194241-6868"
	I0207 19:57:36.500461  329832 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220207194241-6868"
	I0207 19:57:36.500605  329832 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220207194241-6868"
	W0207 19:57:36.500616  329832 addons.go:165] addon storage-provisioner should already be in state true
	I0207 19:57:36.500649  329832 host.go:66] Checking if "kindnet-20220207194241-6868" exists ...
	I0207 19:57:36.500895  329832 cli_runner.go:133] Run: docker container inspect kindnet-20220207194241-6868 --format={{.State.Status}}
	I0207 19:57:36.501020  329832 cli_runner.go:133] Run: docker container inspect kindnet-20220207194241-6868 --format={{.State.Status}}
	I0207 19:57:36.551558  329832 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0207 19:57:36.551706  329832 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 19:57:36.551725  329832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0207 19:57:36.551786  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:36.560893  329832 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220207194241-6868"
	W0207 19:57:36.560926  329832 addons.go:165] addon default-storageclass should already be in state true
	I0207 19:57:36.560957  329832 host.go:66] Checking if "kindnet-20220207194241-6868" exists ...
	I0207 19:57:36.561458  329832 cli_runner.go:133] Run: docker container inspect kindnet-20220207194241-6868 --format={{.State.Status}}
	I0207 19:57:36.618723  329832 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0207 19:57:36.619284  329832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/kindnet-20220207194241-6868/id_rsa Username:docker}
	I0207 19:57:36.621050  329832 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220207194241-6868" to be "Ready" ...
	I0207 19:57:36.632544  329832 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0207 19:57:36.632568  329832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0207 19:57:36.632616  329832 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220207194241-6868
	I0207 19:57:36.690096  329832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/kindnet-20220207194241-6868/id_rsa Username:docker}
	I0207 19:57:36.772816  329832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0207 19:57:36.957408  329832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0207 19:57:37.137906  329832 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
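The replace at 19:57:36 splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.76.1 in this run). The patched Corefile can be read back afterwards (sketch):

	# Show the CoreDNS Corefile with the injected host record (follow-up sketch)
	kubectl --context kindnet-20220207194241-6868 -n kube-system get configmap coredns -o yaml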
	I0207 19:57:37.397787  329832 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0207 19:57:37.397821  329832 addons.go:417] enableAddons completed in 899.886057ms
	I0207 19:57:38.628636  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:57:40.640424  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:57:43.129309  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:57:45.629324  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:57:48.128846  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:57:50.629117  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:57:53.128865  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:57:55.628970  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:57:57.629115  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:57:59.629506  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:02.129265  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:04.129686  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:06.629687  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:09.129459  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:11.628903  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:13.629112  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:16.129553  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:18.629402  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:20.629783  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:23.129235  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:25.629710  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:27.629760  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:30.128921  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:32.628816  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:34.629452  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:37.129045  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:39.628797  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:41.629347  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:44.129202  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:46.129302  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:48.629265  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:51.129193  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:53.129344  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:55.629082  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:58:58.129367  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:00.629042  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:03.129005  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:05.629193  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:07.629745  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:10.129135  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:12.629602  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:15.130159  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:17.629408  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:20.128579  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:22.128784  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:24.128817  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:26.129430  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:28.628670  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:30.628798  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:32.630017  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:35.129185  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:37.629726  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:40.129006  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:42.129315  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:44.628482  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:46.628826  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:49.129503  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:51.629742  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:54.128679  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:56.128957  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 19:59:58.629378  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:01.129117  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:03.629569  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:06.129059  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:08.629759  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:11.128502  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:13.628747  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:15.629108  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:17.629462  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:20.129447  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:22.629062  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:25.128767  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:27.628437  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:29.629040  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:31.629501  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:33.629550  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:36.128979  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:38.129301  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:40.629373  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:43.129358  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:45.628871  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:47.629579  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:50.128414  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:52.128693  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:54.129326  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:56.629064  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:00:58.629500  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:01.128858  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:03.129287  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:05.629023  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:07.629388  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:10.128910  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:12.628919  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:14.629299  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:17.129180  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:19.628658  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:21.629201  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:24.129472  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:26.629205  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:29.128633  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:31.628777  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:33.629534  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:36.129648  329832 node_ready.go:58] node "kindnet-20220207194241-6868" has status "Ready":"False"
	I0207 20:01:36.631242  329832 node_ready.go:38] duration metric: took 4m0.010147911s waiting for node "kindnet-20220207194241-6868" to be "Ready" ...
	I0207 20:01:36.634034  329832 out.go:176] 
	W0207 20:01:36.634215  329832 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0207 20:01:36.634233  329832 out.go:241] * 
	* 
	W0207 20:01:36.635030  329832 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0207 20:01:36.637378  329832 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (279.86s)
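The failure mode here is a node that never reaches Ready within the 5m wait, which with a CNI profile usually points at the network-plugin pods. Plausible follow-up diagnostics (a sketch; all names are taken from this log):

	# Why is the node NotReady? Check node conditions and the kube-system pods.
	kubectl --context kindnet-20220207194241-6868 get nodes -o wide
	kubectl --context kindnet-20220207194241-6868 describe node kindnet-20220207194241-6868
	kubectl --context kindnet-20220207194241-6868 -n kube-system get pods -o wide
	# Collect the full minikube log bundle, as the error box suggests
	minikube logs -p kindnet-20220207194241-6868 --file=logs.txt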

TestNetworkPlugins/group/enable-default-cni/DNS (351.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.169310145s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
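Every nslookup attempt in this test times out with "no servers could be reached", i.e. the pod cannot reach cluster DNS at all. The usual first checks are CoreDNS health, the kube-dns Service endpoints, and the resolver configuration inside the failing pod (a sketch; the context and deployment names come from this log):

	# Is CoreDNS running and does the kube-dns Service have endpoints?
	kubectl --context enable-default-cni-20220207194241-6868 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context enable-default-cni-20220207194241-6868 -n kube-system get endpoints kube-dns
	# What resolver is the netcat pod actually using?
	kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- cat /etc/resolv.conf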
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.177716602s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 19:58:11.853644    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:58:19.323925    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:58:19.329210    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:58:19.339541    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:58:19.359869    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:58:19.400131    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:58:19.480468    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:58:19.640840    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:58:19.961409    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:58:20.602386    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
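The interleaved cert_rotation errors appear to come from client-go's certificate-reload watcher in the shared test kubeconfig: it still references client certificates of profiles (functional-20220207192144-6868, no-preload-20220207194713-6868, and others below) that parallel tests have already deleted, so each reload attempt fails with "no such file or directory". They are noise for this DNS test. The stale entries could be inspected and pruned like this (a sketch; the context name is one of the deleted profiles above):

	# List contexts still recorded in the kubeconfig, then drop a stale one
	kubectl config get-contexts
	kubectl config delete-context no-preload-20220207194713-6868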
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.152476573s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0207 19:58:21.883562    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 19:58:24.444168    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:58:29.565066    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.281773529s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0207 19:58:39.805934    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.166971456s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0207 19:58:57.957376    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:58:57.962705    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:58:57.972994    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:58:57.993302    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:58:58.033623    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:58:58.113892    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:58:58.274744    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:58:58.595293    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:58:59.236089    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:59:00.286773    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:59:00.517175    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 19:59:03.077999    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:59:08.198703    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:59:16.238005    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.162344684s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0207 19:59:18.439232    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 19:59:22.556473    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:22.561785    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:22.572061    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:22.592326    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:22.632616    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:22.712984    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:22.873221    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:23.193737    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:23.834298    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 19:59:25.114496    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:27.674800    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:32.795180    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 19:59:38.920281    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154483644s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0207 19:59:41.247575    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 19:59:43.035402    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 19:59:55.753980    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 20:00:03.515783    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128905543s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0207 20:00:12.696698    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 20:00:19.880597    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:00:36.535050    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:36.540294    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:36.550566    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:36.570735    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:36.611003    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:36.691333    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:36.852332    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:37.172866    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:37.813452    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:39.093745    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:41.654803    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:44.475987    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 20:00:46.775927    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
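[Editor's note] The burst of cert_rotation.go:168 errors above is client-go's certificate-rotation watcher retrying a client.crt that no longer exists (the false-20220207194241-6868 profile had already been torn down), and the gaps between timestamps roughly double each attempt: ~5ms, 10ms, 20ms, out to several seconds. A minimal shell sketch of that backoff shape, assuming GNU sleep (fractional seconds) and awk; CRT is a hypothetical path variable, not something the suite exports:

	CRT=$HOME/.minikube/profiles/false-20220207194241-6868/client.crt
	delay=0.005                                  # first retry after ~5ms
	for attempt in 1 2 3 4 5 6 7 8 9 10 11 12; do
	  [ -r "$CRT" ] && break                     # cert is back; stop retrying
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')   # double the delay
	done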
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147740943s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:00:57.016839    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:03.168648    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 20:01:14.900570    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 20:01:16.134227    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:16.139523    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:16.149794    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:16.170093    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:16.210401    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:16.290682    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:16.451050    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:16.771519    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:17.411723    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:17.497957    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:01:18.692613    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:18.799767    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 20:01:21.253440    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:26.373709    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:01:31.725239    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:31.730503    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:31.740794    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:31.761103    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:31.801381    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:31.881844    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:32.042306    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:32.362862    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:33.003176    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12473512s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:01:34.283676    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:36.614755    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:01:58.458242    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
E0207 20:02:06.397057    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 20:02:12.686734    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.173930455s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:02:38.055571    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
E0207 20:02:53.646909    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:03:11.853121    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 20:03:15.744556    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 20:03:19.324522    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 20:03:20.378474    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130417313s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (351.50s)
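[Editor's note] Each probe above is the same check from net_test.go: exec nslookup kubernetes.default inside the netcat deployment and expect the kubernetes service ClusterIP 10.96.0.1 in the answer. "connection timed out; no servers could be reached" means the pod never got any DNS reply, and the steady ~15.1s per attempt is consistent with the resolver default of three 5-second tries. Some triage commands one might run against the same context; the kube-dns names are the stock Kubernetes defaults, assumed rather than captured in this log:

	kubectl --context enable-default-cni-20220207194241-6868 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context enable-default-cni-20220207194241-6868 -n kube-system get svc,endpoints kube-dns
	kubectl --context enable-default-cni-20220207194241-6868 exec deployment/netcat -- cat /etc/resolv.conf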
TestNetworkPlugins/group/kubenet/DNS (354.98s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:04:22.556266    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
E0207 20:04:25.642416    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150749832s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132311445s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:04:55.753755    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134458758s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:05:12.697064    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12966114s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:05:36.535203    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137169282s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:06:04.219557    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/false-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131837181s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:06:16.134543    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137102179s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:06:31.724725    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:06:43.817106    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147072385s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:06:59.408265    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129411609s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:07:28.318834    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:08:04.160377    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122010251s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:08:19.324013    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:09:16.238231    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.229357728s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146358352s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (354.98s)
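[Editor's note] kubenet fails the same way, so the next useful split is CoreDNS health versus reachability of the kube-dns ClusterIP. A sketch: query the service VIP directly, then a CoreDNS pod IP (taken from a pod listing) to bypass kube-proxy. The 10.96.0.10 VIP is the conventional Kubernetes default and <coredns-pod-ip> is a placeholder, both assumptions rather than values recorded in this report:

	# DNS via the service VIP (exercises kube-proxy + CoreDNS)
	kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default 10.96.0.10
	# DNS straight to one CoreDNS pod (bypasses the service VIP)
	kubectl --context kubenet-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default <coredns-pod-ip>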
TestNetworkPlugins/group/bridge/DNS (368.28s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.193101559s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.245327389s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:07:23.199359    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:07:23.204610    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:07:23.214824    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:07:23.235057    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:07:23.275272    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:07:23.355515    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:07:23.515880    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:07:23.836684    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:07:24.477566    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:07:25.758097    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121277604s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:07:33.438962    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:07:43.679523    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.115820734s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129006807s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:08:11.853439    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126538846s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:08:45.120948    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129171876s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:08:57.956899    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.118953808s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:09:22.556195    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/auto-20220207194241-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128235549s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:09:55.753890    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:10:07.041586    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
E0207 20:10:12.697479    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126585343s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:11:16.133928    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122776605s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0207 20:11:31.725517    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:12:23.200170    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
E0207 20:12:50.881810    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/enable-default-cni-20220207194241-6868/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155664933s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (368.28s)
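[Editor's note] bridge makes it three plugins with the identical symptom, which points at the shared probe path (netcat pod to cluster DNS) rather than any single CNI. When reproducing locally, watching port 53 on the node shows whether queries ever leave the pod; a sketch assuming the docker-driver node is reachable via minikube ssh and has tcpdump installed (not verified by this report):

	minikube -p bridge-20220207194241-6868 ssh -- sudo tcpdump -ni any udp port 53 -c 20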

Test pass (246/279)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 9.83
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.23.3/json-events 7.17
11 TestDownloadOnly/v1.23.3/preload-exists 0
15 TestDownloadOnly/v1.23.3/LogsDuration 0.08
17 TestDownloadOnly/v1.23.4-rc.0/json-events 10.61
18 TestDownloadOnly/v1.23.4-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.4-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.36
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
25 TestDownloadOnlyKic 30.07
26 TestBinaryMirror 0.94
27 TestOffline 71.28
29 TestAddons/Setup 122.69
31 TestAddons/parallel/Registry 13.95
32 TestAddons/parallel/Ingress 21.95
33 TestAddons/parallel/MetricsServer 6.27
34 TestAddons/parallel/HelmTiller 9.35
36 TestAddons/parallel/CSI 47.94
38 TestAddons/serial/GCPAuth 39.31
39 TestAddons/StoppedEnableDisable 11.48
40 TestCertOptions 37.73
41 TestCertExpiration 221.47
42 TestDockerFlags 242.1
43 TestForceSystemdFlag 43.36
44 TestForceSystemdEnv 48.57
45 TestKVMDriverInstallOrUpdate 4.59
49 TestErrorSpam/setup 27.66
50 TestErrorSpam/start 0.96
51 TestErrorSpam/status 1.23
52 TestErrorSpam/pause 1.52
53 TestErrorSpam/unpause 1.69
54 TestErrorSpam/stop 11.03
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 43.59
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 5.13
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.17
65 TestFunctional/serial/CacheCmd/cache/add_remote 3.16
66 TestFunctional/serial/CacheCmd/cache/add_local 1.69
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
70 TestFunctional/serial/CacheCmd/cache/cache_reload 1.91
71 TestFunctional/serial/CacheCmd/cache/delete 0.12
72 TestFunctional/serial/MinikubeKubectlCmd 0.11
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
74 TestFunctional/serial/ExtraConfig 28.08
75 TestFunctional/serial/ComponentHealth 0.05
76 TestFunctional/serial/LogsCmd 1.3
77 TestFunctional/serial/LogsFileCmd 1.3
79 TestFunctional/parallel/ConfigCmd 0.48
80 TestFunctional/parallel/DashboardCmd 3.86
81 TestFunctional/parallel/DryRun 0.9
82 TestFunctional/parallel/InternationalLanguage 0.25
83 TestFunctional/parallel/StatusCmd 1.49
86 TestFunctional/parallel/ServiceCmd 15.91
87 TestFunctional/parallel/AddonsCmd 0.19
88 TestFunctional/parallel/PersistentVolumeClaim 38.95
90 TestFunctional/parallel/SSHCmd 0.88
91 TestFunctional/parallel/CpCmd 1.62
92 TestFunctional/parallel/MySQL 25.95
93 TestFunctional/parallel/FileSync 0.38
94 TestFunctional/parallel/CertSync 2.15
98 TestFunctional/parallel/NodeLabels 0.05
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
102 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
103 TestFunctional/parallel/Version/short 0.06
104 TestFunctional/parallel/Version/components 1.49
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.23
109 TestFunctional/parallel/ProfileCmd/profile_list 0.53
110 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.41
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.37
124 TestFunctional/parallel/ImageCommands/ImageBuild 3.08
125 TestFunctional/parallel/ImageCommands/Setup 2.01
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.68
127 TestFunctional/parallel/DockerEnv/bash 1.41
128 TestFunctional/parallel/MountCmd/any-port 9.24
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.61
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.06
132 TestFunctional/parallel/MountCmd/specific-port 2.55
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.73
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.72
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.6
136 TestFunctional/delete_addon-resizer_images 0.1
137 TestFunctional/delete_my-image_image 0.03
138 TestFunctional/delete_minikube_cached_images 0.03
141 TestIngressAddonLegacy/StartLegacyK8sCluster 61.76
143 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.27
144 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.42
145 TestIngressAddonLegacy/serial/ValidateIngressAddons 31.09
148 TestJSONOutput/start/Command 43.11
149 TestJSONOutput/start/Audit 0
151 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/pause/Command 0.66
155 TestJSONOutput/pause/Audit 0
157 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/unpause/Command 0.63
161 TestJSONOutput/unpause/Audit 0
163 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/stop/Command 11.1
167 TestJSONOutput/stop/Audit 0
169 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
171 TestErrorJSONOutput 0.3
173 TestKicCustomNetwork/create_custom_network 30.23
174 TestKicCustomNetwork/use_default_bridge_network 28.8
175 TestKicExistingNetwork 30.02
176 TestMainNoArgs 0.06
179 TestMountStart/serial/StartWithMountFirst 6.12
180 TestMountStart/serial/VerifyMountFirst 0.34
181 TestMountStart/serial/StartWithMountSecond 5.67
182 TestMountStart/serial/VerifyMountSecond 0.34
183 TestMountStart/serial/DeleteFirst 1.79
184 TestMountStart/serial/VerifyMountPostDelete 0.34
185 TestMountStart/serial/Stop 1.29
186 TestMountStart/serial/RestartStopped 6.8
187 TestMountStart/serial/VerifyMountPostStop 0.34
190 TestMultiNode/serial/FreshStart2Nodes 75.52
191 TestMultiNode/serial/DeployApp2Nodes 4.67
192 TestMultiNode/serial/PingHostFrom2Pods 0.89
193 TestMultiNode/serial/AddNode 28.92
194 TestMultiNode/serial/ProfileList 0.39
195 TestMultiNode/serial/CopyFile 12.51
196 TestMultiNode/serial/StopNode 2.62
197 TestMultiNode/serial/StartAfterStop 25.32
198 TestMultiNode/serial/RestartKeepsNodes 102.14
199 TestMultiNode/serial/DeleteNode 5.64
200 TestMultiNode/serial/StopMultiNode 21.99
201 TestMultiNode/serial/RestartMultiNode 85.2
202 TestMultiNode/serial/ValidateNameConflict 30.55
207 TestPreload 116.56
209 TestScheduledStopUnix 99.73
210 TestSkaffold 72.59
212 TestInsufficientStorage 15.26
213 TestRunningBinaryUpgrade 120.14
215 TestKubernetesUpgrade 181.52
216 TestMissingContainerUpgrade 137.65
219 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
224 TestStoppedBinaryUpgrade/Setup 1.22
227 TestNoKubernetes/serial/StartWithK8s 55.15
228 TestStoppedBinaryUpgrade/Upgrade 90.69
229 TestNoKubernetes/serial/StartWithStopK8s 16.1
230 TestNoKubernetes/serial/Start 10.84
231 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
232 TestNoKubernetes/serial/ProfileList 1.37
233 TestNoKubernetes/serial/Stop 1.35
234 TestNoKubernetes/serial/StartNoArgs 6.47
235 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.44
236 TestStoppedBinaryUpgrade/MinikubeLogs 1.71
249 TestPause/serial/Start 61.84
250 TestPause/serial/SecondStartNoReconfiguration 5.66
251 TestPause/serial/Pause 0.84
252 TestPause/serial/VerifyStatus 0.55
253 TestPause/serial/Unpause 0.85
254 TestPause/serial/PauseAgain 0.95
255 TestPause/serial/DeletePaused 3
256 TestPause/serial/VerifyDeletedResources 0.87
260 TestStartStop/group/embed-certs/serial/FirstStart 84.58
261 TestStartStop/group/embed-certs/serial/DeployApp 9.31
262 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.59
263 TestStartStop/group/embed-certs/serial/Stop 10.87
264 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
265 TestStartStop/group/embed-certs/serial/SecondStart 338.54
267 TestStartStop/group/no-preload/serial/FirstStart 65.81
269 TestStartStop/group/default-k8s-different-port/serial/FirstStart 57.44
273 TestStartStop/group/no-preload/serial/DeployApp 8.49
274 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.73
275 TestStartStop/group/no-preload/serial/Stop 10.91
276 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
277 TestStartStop/group/no-preload/serial/SecondStart 339.03
278 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.5
279 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.65
280 TestStartStop/group/default-k8s-different-port/serial/Stop 10.98
281 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.2
282 TestStartStop/group/default-k8s-different-port/serial/SecondStart 345.5
284 TestStartStop/group/old-k8s-version/serial/SecondStart 315.69
285 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.07
286 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.19
287 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
288 TestStartStop/group/embed-certs/serial/Pause 3.21
290 TestStartStop/group/newest-cni/serial/FirstStart 39.3
291 TestStartStop/group/newest-cni/serial/DeployApp 0
292 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.78
293 TestStartStop/group/newest-cni/serial/Stop 10.9
294 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
295 TestStartStop/group/newest-cni/serial/SecondStart 19.62
296 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
297 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
298 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
299 TestStartStop/group/newest-cni/serial/Pause 3.09
300 TestNetworkPlugins/group/auto/Start 44.35
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.02
302 TestNetworkPlugins/group/auto/KubeletFlags 0.46
303 TestNetworkPlugins/group/auto/NetCatPod 13.23
304 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
305 TestNetworkPlugins/group/auto/DNS 0.19
306 TestNetworkPlugins/group/auto/Localhost 0.18
307 TestNetworkPlugins/group/auto/HairPin 5.16
308 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
309 TestStartStop/group/no-preload/serial/Pause 3.79
310 TestNetworkPlugins/group/false/Start 51.9
311 TestNetworkPlugins/group/cilium/Start 91.63
312 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
313 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.08
314 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.59
315 TestStartStop/group/default-k8s-different-port/serial/Pause 3.95
317 TestNetworkPlugins/group/false/KubeletFlags 0.42
318 TestNetworkPlugins/group/false/NetCatPod 12.33
319 TestNetworkPlugins/group/false/DNS 0.18
320 TestNetworkPlugins/group/false/Localhost 0.18
321 TestNetworkPlugins/group/false/HairPin 5.18
323 TestNetworkPlugins/group/cilium/ControllerPod 5.02
324 TestNetworkPlugins/group/cilium/KubeletFlags 0.49
325 TestNetworkPlugins/group/cilium/NetCatPod 12.16
326 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
327 TestNetworkPlugins/group/cilium/DNS 0.17
328 TestNetworkPlugins/group/cilium/Localhost 0.18
329 TestNetworkPlugins/group/cilium/HairPin 0.17
330 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 8.52
331 TestNetworkPlugins/group/enable-default-cni/Start 44.8
332 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.52
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
338 TestNetworkPlugins/group/bridge/Start 290.99
339 TestNetworkPlugins/group/kubenet/Start 41.71
340 TestNetworkPlugins/group/kubenet/KubeletFlags 0.4
341 TestNetworkPlugins/group/kubenet/NetCatPod 10.2
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
344 TestNetworkPlugins/group/bridge/NetCatPod 11.25

TestDownloadOnly/v1.16.0/json-events (9.83s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220207191613-6868 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220207191613-6868 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.826521984s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.83s)
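The -o=json flag in the start invocation above switches minikube's progress output to a stream of JSON events (one object per line), which is what the json-events assertions consume. A minimal sketch of inspecting that stream by hand, reusing the flags from the test; the profile name download-demo and the jq filter are illustrative, not taken from this report:

    out/minikube-linux-amd64 start -o=json --download-only -p download-demo --force \
      --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker \
      | jq -r '.type'        # one event type per line, e.g. io.k8s.sigs.minikube.step
    out/minikube-linux-amd64 delete -p download-demo   # clean up the illustrative profile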

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220207191613-6868
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220207191613-6868: exit status 85 (81.260306ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 19:16:13
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 19:16:13.933655    6881 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:16:13.934097    6881 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:16:13.934115    6881 out.go:310] Setting ErrFile to fd 2...
	I0207 19:16:13.934122    6881 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:16:13.934371    6881 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	W0207 19:16:13.934603    6881 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/config/config.json: no such file or directory
	I0207 19:16:13.935163    6881 out.go:304] Setting JSON to true
	I0207 19:16:13.936278    6881 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3530,"bootTime":1644257844,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:16:13.936365    6881 start.go:122] virtualization: kvm guest
	I0207 19:16:13.939484    6881 notify.go:174] Checking for updates...
	W0207 19:16:13.939491    6881 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball: no such file or directory
	I0207 19:16:13.941739    6881 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:16:13.984443    6881 docker.go:132] docker version: linux-20.10.12
	I0207 19:16:13.984551    6881 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:16:14.395557    6881 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-07 19:16:14.013579956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:16:14.395711    6881 docker.go:237] overlay module found
	I0207 19:16:14.398229    6881 start.go:281] selected driver: docker
	I0207 19:16:14.398251    6881 start.go:798] validating driver "docker" against <nil>
	I0207 19:16:14.398514    6881 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:16:14.493599    6881 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-07 19:16:14.427639958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:16:14.493761    6881 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0207 19:16:14.494502    6881 start_flags.go:369] Using suggested 8000MB memory alloc based on sys=32104MB, container=32104MB
	I0207 19:16:14.494647    6881 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0207 19:16:14.494671    6881 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0207 19:16:14.494702    6881 cni.go:93] Creating CNI manager for ""
	I0207 19:16:14.494718    6881 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:16:14.494732    6881 start_flags.go:302] config:
	{Name:download-only-20220207191613-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220207191613-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:16:14.497310    6881 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:16:14.499145    6881 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0207 19:16:14.499283    6881 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:16:14.540831    6881 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 19:16:14.540859    6881 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 19:16:14.609550    6881 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0207 19:16:14.609577    6881 cache.go:57] Caching tarball of preloaded images
	I0207 19:16:14.609921    6881 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0207 19:16:14.612496    6881 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:16:14.731451    6881 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:0c23f68e9d9de4489f09a530426fd1e3 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220207191613-6868"
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
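The non-zero exit above is the expected outcome: a --download-only run never creates a node, so "minikube logs" finds no control plane to read (hence the "control plane node does not exist" hint in the output) and returns exit status 85, which this test counts as a pass. Reproducing that by hand might look like the following; the profile name is an example:

    out/minikube-linux-amd64 start --download-only -p download-demo --force \
      --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
    out/minikube-linux-amd64 logs -p download-demo
    echo $?   # 85: no control plane node exists for a download-only profile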

TestDownloadOnly/v1.23.3/json-events (7.17s)
=== RUN   TestDownloadOnly/v1.23.3/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220207191613-6868 --force --alsologtostderr --kubernetes-version=v1.23.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220207191613-6868 --force --alsologtostderr --kubernetes-version=v1.23.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.169328188s)
--- PASS: TestDownloadOnly/v1.23.3/json-events (7.17s)

TestDownloadOnly/v1.23.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.23.3/preload-exists
--- PASS: TestDownloadOnly/v1.23.3/preload-exists (0.00s)

TestDownloadOnly/v1.23.3/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.23.3/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220207191613-6868
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220207191613-6868: exit status 85 (77.685844ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 19:16:23
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 19:16:23.844953    7029 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:16:23.845048    7029 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:16:23.845052    7029 out.go:310] Setting ErrFile to fd 2...
	I0207 19:16:23.845055    7029 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:16:23.845156    7029 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	W0207 19:16:23.845277    7029 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/config/config.json: no such file or directory
	I0207 19:16:23.845389    7029 out.go:304] Setting JSON to true
	I0207 19:16:23.846247    7029 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3540,"bootTime":1644257844,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:16:23.846336    7029 start.go:122] virtualization: kvm guest
	I0207 19:16:23.849406    7029 notify.go:174] Checking for updates...
	I0207 19:16:23.851594    7029 config.go:176] Loaded profile config "download-only-20220207191613-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0207 19:16:23.851655    7029 start.go:706] api.Load failed for download-only-20220207191613-6868: filestore "download-only-20220207191613-6868": Docker machine "download-only-20220207191613-6868" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 19:16:23.851702    7029 driver.go:344] Setting default libvirt URI to qemu:///system
	W0207 19:16:23.851728    7029 start.go:706] api.Load failed for download-only-20220207191613-6868: filestore "download-only-20220207191613-6868": Docker machine "download-only-20220207191613-6868" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 19:16:23.894205    7029 docker.go:132] docker version: linux-20.10.12
	I0207 19:16:23.894322    7029 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:16:23.991881    7029 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-07 19:16:23.923216713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:16:23.991997    7029 docker.go:237] overlay module found
	I0207 19:16:23.994619    7029 start.go:281] selected driver: docker
	I0207 19:16:23.994639    7029 start.go:798] validating driver "docker" against &{Name:download-only-20220207191613-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220207191613-6868 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:16:23.994909    7029 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:16:24.092794    7029 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-07 19:16:24.025498997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:16:24.093352    7029 cni.go:93] Creating CNI manager for ""
	I0207 19:16:24.093369    7029 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:16:24.093378    7029 start_flags.go:302] config:
	{Name:download-only-20220207191613-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:download-only-20220207191613-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:16:24.095931    7029 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:16:24.097630    7029 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:16:24.097800    7029 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:16:24.137203    7029 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 19:16:24.137235    7029 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 19:16:24.205989    7029 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	I0207 19:16:24.206019    7029 cache.go:57] Caching tarball of preloaded images
	I0207 19:16:24.206325    7029 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
	I0207 19:16:24.208721    7029 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:16:24.332987    7029 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4?checksum=md5:1c52b21a02ef67e2e4434a0c47aabce7 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220207191613-6868"
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.3/LogsDuration (0.08s)
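As the Last Start log above shows, each download-only run resolves a versioned preload tarball on storage.googleapis.com, downloads it with a pinned md5, and caches it under .minikube/cache/preloaded-tarball. The same artifact can be fetched and verified by hand; the URL and checksum below are copied verbatim from the log:

    URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
    curl -fLO "$URL"
    echo "1c52b21a02ef67e2e4434a0c47aabce7  preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4" \
      | md5sum -c -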

TestDownloadOnly/v1.23.4-rc.0/json-events (10.61s)
=== RUN   TestDownloadOnly/v1.23.4-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220207191613-6868 --force --alsologtostderr --kubernetes-version=v1.23.4-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220207191613-6868 --force --alsologtostderr --kubernetes-version=v1.23.4-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (10.612972527s)
--- PASS: TestDownloadOnly/v1.23.4-rc.0/json-events (10.61s)

TestDownloadOnly/v1.23.4-rc.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.23.4-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.4-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.4-rc.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.23.4-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220207191613-6868
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220207191613-6868: exit status 85 (78.318315ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/02/07 19:16:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0207 19:16:31.091706    7181 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:16:31.091801    7181 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:16:31.091806    7181 out.go:310] Setting ErrFile to fd 2...
	I0207 19:16:31.091809    7181 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:16:31.091928    7181 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	W0207 19:16:31.092038    7181 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/config/config.json: no such file or directory
	I0207 19:16:31.092144    7181 out.go:304] Setting JSON to true
	I0207 19:16:31.093048    7181 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3547,"bootTime":1644257844,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:16:31.093134    7181 start.go:122] virtualization: kvm guest
	I0207 19:16:31.096250    7181 notify.go:174] Checking for updates...
	I0207 19:16:31.098830    7181 config.go:176] Loaded profile config "download-only-20220207191613-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	W0207 19:16:31.098882    7181 start.go:706] api.Load failed for download-only-20220207191613-6868: filestore "download-only-20220207191613-6868": Docker machine "download-only-20220207191613-6868" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 19:16:31.098941    7181 driver.go:344] Setting default libvirt URI to qemu:///system
	W0207 19:16:31.098970    7181 start.go:706] api.Load failed for download-only-20220207191613-6868: filestore "download-only-20220207191613-6868": Docker machine "download-only-20220207191613-6868" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0207 19:16:31.137553    7181 docker.go:132] docker version: linux-20.10.12
	I0207 19:16:31.137678    7181 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:16:31.232505    7181 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-07 19:16:31.166819067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:16:31.232613    7181 docker.go:237] overlay module found
	I0207 19:16:31.235078    7181 start.go:281] selected driver: docker
	I0207 19:16:31.235101    7181 start.go:798] validating driver "docker" against &{Name:download-only-20220207191613-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:download-only-20220207191613-6868 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:16:31.235363    7181 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:16:31.330283    7181 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-02-07 19:16:31.264332696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:16:31.330939    7181 cni.go:93] Creating CNI manager for ""
	I0207 19:16:31.330957    7181 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0207 19:16:31.330965    7181 start_flags.go:302] config:
	{Name:download-only-20220207191613-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.4-rc.0 ClusterName:download-only-20220207191613-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:16:31.333504    7181 cache.go:120] Beginning downloading kic base image for docker with docker
	I0207 19:16:31.335430    7181 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 19:16:31.335478    7181 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon
	I0207 19:16:31.376812    7181 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 in local docker daemon, skipping pull
	I0207 19:16:31.376848    7181 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 exists in daemon, skipping load
	I0207 19:16:31.456104    7181 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4-rc.0/preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4
	I0207 19:16:31.456140    7181 cache.go:57] Caching tarball of preloaded images
	I0207 19:16:31.456559    7181 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 19:16:31.459153    7181 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:16:31.573540    7181 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.4-rc.0/preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:d735572711ef4032ba979f3c4f19cb7e -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4
	I0207 19:16:39.870428    7181 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:16:39.870523    7181 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0207 19:16:40.902231    7181 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.4-rc.0 on docker
	I0207 19:16:40.902403    7181 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/download-only-20220207191613-6868/config.json ...
	I0207 19:16:40.902635    7181 preload.go:132] Checking if preload exists for k8s version v1.23.4-rc.0 and runtime docker
	I0207 19:16:40.902833    7181 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.4-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/linux/v1.23.4-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220207191613-6868"
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.4-rc.0/LogsDuration (0.08s)
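Unlike the two earlier runs, this one completes the preload checksum verification and then fetches the matching kubectl binary, validating it against the .sha256 file published alongside it. Checking that by hand (URLs verbatim from the log; the awk reshaping only builds the "hash  filename" line that sha256sum -c expects):

    V=v1.23.4-rc.0
    curl -fLO "https://storage.googleapis.com/kubernetes-release/release/$V/bin/linux/amd64/kubectl"
    curl -fL  "https://storage.googleapis.com/kubernetes-release/release/$V/bin/linux/amd64/kubectl.sha256" \
      | awk '{print $1 "  kubectl"}' | sha256sum -c -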

TestDownloadOnly/DeleteAll (0.36s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.36s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220207191613-6868
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnlyKic (30.07s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220207191642-6868 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:230: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220207191642-6868 --force --alsologtostderr --driver=docker  --container-runtime=docker: (28.772659768s)
helpers_test.go:176: Cleaning up "download-docker-20220207191642-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220207191642-6868
--- PASS: TestDownloadOnlyKic (30.07s)
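TestDownloadOnlyKic exercises the docker (kic) path, where the artifact of interest is the kicbase container image rather than a tarball; the Last Start logs earlier in this report show minikube probing the local Docker daemon for that image by digest before deciding whether to pull. The equivalent manual probe:

    docker images --digests gcr.io/k8s-minikube/kicbase-builds
    # on this agent the logs report digest sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8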

TestBinaryMirror (0.94s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220207191712-6868 --alsologtostderr --binary-mirror http://127.0.0.1:35337 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-20220207191712-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220207191712-6868
--- PASS: TestBinaryMirror (0.94s)
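
The binary-mirror run above points the Kubernetes binary downloads at a local HTTP server instead of the default release bucket; a sketch of the same invocation, with the mirror address being whatever helper server is listening locally (placeholder below):

    minikube start --download-only -p mirror-check --binary-mirror http://127.0.0.1:35337 --driver=docker --container-runtime=docker
    minikube delete -p mirror-check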

TestOffline (71.28s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20220207194023-6868 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220207194023-6868 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m8.876497974s)
helpers_test.go:176: Cleaning up "offline-docker-20220207194023-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20220207194023-6868

=== CONT  TestOffline
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220207194023-6868: (2.403234516s)
--- PASS: TestOffline (71.28s)

TestAddons/Setup (122.69s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220207191713-6868 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220207191713-6868 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m2.687315931s)
--- PASS: TestAddons/Setup (122.69s)
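
The setup above enables nine addons in a single start via repeated --addons flags; a minimal sketch with just two of them (profile name hypothetical):

    minikube start -p addons-demo --memory=4000 --addons=registry --addons=metrics-server --driver=docker --container-runtime=docker
    # Addons can also be toggled on a running profile:
    minikube -p addons-demo addons enable ingress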

TestAddons/parallel/Registry (13.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 17.964469ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-8d5lt" [d62c9d6f-523c-4506-bf44-47cc6524534e] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010267129s

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-46mmg" [40ba9d90-cabb-49f2-8fe2-b89aa6ba705b] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00781537s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220207191713-6868 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220207191713-6868 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:296: (dbg) Done: kubectl --context addons-20220207191713-6868 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.153159045s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 ip

=== CONT  TestAddons/parallel/Registry
addons_test.go:339: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.95s)
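
The reachability check above amounts to probing the registry Service from a one-off busybox pod, then reading the node IP used for the proxied endpoint; roughly (context/profile name hypothetical):

    kubectl --context addons-demo run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    minikube -p addons-demo ip   # node address used to reach registry-proxy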

TestAddons/parallel/Ingress (21.95s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220207191713-6868 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220207191713-6868 replace --force -f testdata/nginx-ingress-v1.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220207191713-6868 replace --force -f testdata/nginx-pod-svc.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [9e5e139d-86d4-4acc-96c0-ee694956f129] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [9e5e139d-86d4-4acc-96c0-ee694956f129] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.077309988s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context addons-20220207191713-6868 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2

=== CONT  TestAddons/parallel/Ingress
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable ingress-dns --alsologtostderr -v=1
2022/02/07 19:19:29 [DEBUG] GET http://192.168.49.2:5000

=== CONT  TestAddons/parallel/Ingress
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable ingress --alsologtostderr -v=1: (7.69018103s)
--- PASS: TestAddons/parallel/Ingress (21.95s)
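
The ingress verification above curls the controller from inside the node with a Host header matching the Ingress rule, and checks ingress-dns resolution against the node IP; approximately (profile name hypothetical):

    minikube -p addons-demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(minikube -p addons-demo ip)"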

TestAddons/parallel/MetricsServer (6.27s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 2.130535ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-6b76bd68b6-6hvrr" [351a88a9-719b-4d66-b32c-c0dcf3fda71e] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007495456s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220207191713-6868 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:383: (dbg) Done: out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable metrics-server --alsologtostderr -v=1: (1.19473086s)
--- PASS: TestAddons/parallel/MetricsServer (6.27s)
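
Once metrics-server reports healthy, pod metrics become queryable through the metrics API; the probe used above is simply (context name hypothetical):

    kubectl --context addons-demo top pods -n kube-system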

TestAddons/parallel/HelmTiller (9.35s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 18.229323ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-6d67d5465d-dck85" [f6c354c8-2423-4a2b-ae82-5e78aa7ef33f] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008983984s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220207191713-6868 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220207191713-6868 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.886958323s)
addons_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.35s)
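
The helm-tiller check above runs a matching Helm 2 client as a one-off pod and asks it for the server version; roughly (context name hypothetical):

    kubectl --context addons-demo run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version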

TestAddons/parallel/CSI (47.94s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 17.147633ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220207191713-6868 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220207191713-6868 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220207191713-6868 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [cddf56e8-496d-4d35-8ea9-f3581f79f645] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [cddf56e8-496d-4d35-8ea9-f3581f79f645] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [cddf56e8-496d-4d35-8ea9-f3581f79f645] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 25.007156102s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220207191713-6868 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220207191713-6868 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220207191713-6868 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220207191713-6868 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220207191713-6868 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220207191713-6868 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220207191713-6868 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220207191713-6868 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [c85eab30-ee11-4aac-8538-e7d070fe03e9] Pending
helpers_test.go:343: "task-pv-pod-restore" [c85eab30-ee11-4aac-8538-e7d070fe03e9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [c85eab30-ee11-4aac-8538-e7d070fe03e9] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.005732369s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220207191713-6868 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220207191713-6868 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220207191713-6868 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.9716946s)
addons_test.go:593: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.94s)
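
The CSI exercise above is a provision/snapshot/restore round trip driven entirely by the manifests under the repository's testdata/csi-hostpath-driver directory; condensed (kubectl context assumed current):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml             # claim backed by csi-hostpath
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod that writes to the claim
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot of the claim
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new claim from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod reading restored data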

TestAddons/serial/GCPAuth (39.31s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220207191713-6868 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [da096e43-fbd3-4ea2-8b17-3f7acc830af4] Pending
helpers_test.go:343: "busybox" [da096e43-fbd3-4ea2-8b17-3f7acc830af4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [da096e43-fbd3-4ea2-8b17-3f7acc830af4] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.00607975s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220207191713-6868 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220207191713-6868 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-linux-amd64 -p addons-20220207191713-6868 addons disable gcp-auth --alsologtostderr -v=1: (6.006795225s)
addons_test.go:682: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220207191713-6868 addons enable gcp-auth
addons_test.go:682: (dbg) Done: out/minikube-linux-amd64 -p addons-20220207191713-6868 addons enable gcp-auth: (2.965867953s)
addons_test.go:688: (dbg) Run:  kubectl --context addons-20220207191713-6868 apply -f testdata/private-image.yaml
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7f8587d5b7-7z7cq" [2809d629-d275-429b-9bc5-064b0eae28d4] Pending
helpers_test.go:343: "private-image-7f8587d5b7-7z7cq" [2809d629-d275-429b-9bc5-064b0eae28d4] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7f8587d5b7-7z7cq" [2809d629-d275-429b-9bc5-064b0eae28d4] Running
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 14.006771315s
addons_test.go:701: (dbg) Run:  kubectl --context addons-20220207191713-6868 apply -f testdata/private-image-eu.yaml
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-869dcfd8c7-7nhzp" [3fded01e-9e9c-4646-ab47-80e91573a129] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-869dcfd8c7-7nhzp" [3fded01e-9e9c-4646-ab47-80e91573a129] Running
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 7.005818452s
--- PASS: TestAddons/serial/GCPAuth (39.31s)
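
The gcp-auth assertions above boil down to checking that credentials are injected into pod environments; the probes are (context name hypothetical, pod name from the test manifest):

    kubectl --context addons-demo exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-demo exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"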

TestAddons/StoppedEnableDisable (11.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220207191713-6868
addons_test.go:133: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220207191713-6868: (11.279442957s)
addons_test.go:137: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220207191713-6868
addons_test.go:141: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220207191713-6868
--- PASS: TestAddons/StoppedEnableDisable (11.48s)

TestCertOptions (37.73s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220207194401-6868 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220207194401-6868 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (34.065938437s)
cert_options_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220207194401-6868 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:89: (dbg) Run:  kubectl --context cert-options-20220207194401-6868 config view
cert_options_test.go:101: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220207194401-6868 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220207194401-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220207194401-6868

=== CONT  TestCertOptions
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220207194401-6868: (2.835333332s)
--- PASS: TestCertOptions (37.73s)
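
The SAN and port assertions above come from reading the generated apiserver certificate inside the node; a sketch (profile name hypothetical):

    minikube start -p cert-demo --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=docker
    minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"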

TestCertExpiration (221.47s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220207194331-6868 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220207194331-6868 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (34.951363524s)
E0207 19:44:16.238309    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220207194331-6868 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220207194331-6868 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (4.022747731s)
helpers_test.go:176: Cleaning up "cert-expiration-20220207194331-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220207194331-6868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220207194331-6868: (2.49184205s)
--- PASS: TestCertExpiration (221.47s)
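
The expiration scenario above issues three-minute certificates, waits for them to lapse (hence the ~220s runtime), then restarts with a one-year TTL to force regeneration; condensed (profile name hypothetical):

    minikube start -p exp-demo --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker
    # ...wait ~3 minutes for the certs to expire...
    minikube start -p exp-demo --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker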

TestDockerFlags (242.1s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20220207194358-6868 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220207194358-6868 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (3m58.421889928s)
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220207194358-6868 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220207194358-6868 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20220207194358-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20220207194358-6868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220207194358-6868: (2.841955975s)
--- PASS: TestDockerFlags (242.10s)
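
The flag checks above confirm that --docker-env and --docker-opt values survive into the dockerd systemd unit inside the node; roughly (profile name hypothetical):

    minikube start -p flags-demo --docker-env=FOO=BAR --docker-opt=debug --driver=docker --container-runtime=docker
    minikube -p flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"
    minikube -p flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"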

TestForceSystemdFlag (43.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220207194159-6868 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220207194159-6868 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.078020421s)
docker_test.go:105: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220207194159-6868 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdFlag
helpers_test.go:176: Cleaning up "force-systemd-flag-20220207194159-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220207194159-6868

=== CONT  TestForceSystemdFlag
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220207194159-6868: (2.663786823s)
--- PASS: TestForceSystemdFlag (43.36s)
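
Both force-systemd variants above verify the same outcome: dockerd inside the node running with the systemd cgroup driver instead of the default cgroupfs. The probe is (profile name hypothetical):

    minikube -p systemd-demo ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd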

TestForceSystemdEnv (48.57s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220207194242-6868 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220207194242-6868 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.374136931s)
docker_test.go:105: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220207194242-6868 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20220207194242-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220207194242-6868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220207194242-6868: (2.695695859s)
--- PASS: TestForceSystemdEnv (48.57s)

TestKVMDriverInstallOrUpdate (4.59s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.59s)

TestErrorSpam/setup (27.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220207192057-6868 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220207192057-6868 --driver=docker  --container-runtime=docker
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220207192057-6868 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220207192057-6868 --driver=docker  --container-runtime=docker: (27.657775321s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (27.66s)

TestErrorSpam/start (0.96s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 start --dry-run
--- PASS: TestErrorSpam/start (0.96s)

TestErrorSpam/status (1.23s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 status
--- PASS: TestErrorSpam/status (1.23s)

TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

TestErrorSpam/stop (11.03s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 stop
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 stop: (10.717417267s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220207192057-6868 --log_dir /tmp/nospam-20220207192057-6868 stop
--- PASS: TestErrorSpam/stop (11.03s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1715: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/files/etc/test/nested/copy/6868/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.59s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2097: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220207192144-6868 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2097: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220207192144-6868 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (43.587214451s)
--- PASS: TestFunctional/serial/StartWithProxy (43.59s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.13s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220207192144-6868 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220207192144-6868 --alsologtostderr -v=8: (5.125036163s)
functional_test.go:659: soft start took 5.126346696s for "functional-20220207192144-6868" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.13s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.17s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-20220207192144-6868 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.17s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1050: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 cache add k8s.gcr.io/pause:3.1
functional_test.go:1050: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 cache add k8s.gcr.io/pause:3.3
functional_test.go:1050: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 cache add k8s.gcr.io/pause:3.3: (1.524281394s)
functional_test.go:1050: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 cache add k8s.gcr.io/pause:latest
functional_test.go:1050: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 cache add k8s.gcr.io/pause:latest: (1.10665136s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1081: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220207192144-6868 /tmp/functional-20220207192144-68681578652743
functional_test.go:1093: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 cache add minikube-local-cache-test:functional-20220207192144-6868
functional_test.go:1093: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 cache add minikube-local-cache-test:functional-20220207192144-6868: (1.380408614s)
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 cache delete minikube-local-cache-test:functional-20220207192144-6868
functional_test.go:1087: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220207192144-6868
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1128: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1157: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (363.040658ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 cache reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)
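
The reload sequence above deletes a cached image inside the node, confirms the runtime no longer sees it, and restores it from minikube's local cache; condensed (profile name hypothetical):

    minikube -p func-demo ssh sudo docker rmi k8s.gcr.io/pause:latest
    minikube -p func-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # non-zero exit: image gone
    minikube -p func-demo cache reload
    minikube -p func-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again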

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1176: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1176: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 kubectl -- --context functional-20220207192144-6868 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-20220207192144-6868 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (28.08s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220207192144-6868 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220207192144-6868 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (28.079582497s)
functional_test.go:757: restart took 28.079693894s for "functional-20220207192144-6868" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (28.08s)
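
The restart above threads a component override through via --extra-config, whose shape is component.key=value; for example (profile name hypothetical):

    minikube start -p func-demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all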

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:811: (dbg) Run:  kubectl --context functional-20220207192144-6868 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:826: etcd phase: Running
functional_test.go:836: etcd status: Ready
functional_test.go:826: kube-apiserver phase: Running
functional_test.go:836: kube-apiserver status: Ready
functional_test.go:826: kube-controller-manager phase: Running
functional_test.go:836: kube-controller-manager status: Ready
functional_test.go:826: kube-scheduler phase: Running
functional_test.go:836: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (1.3s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 logs
functional_test.go:1240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 logs: (1.303071357s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

TestFunctional/serial/LogsFileCmd (1.3s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 logs --file /tmp/functional-20220207192144-68683799824631/logs.txt
functional_test.go:1257: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 logs --file /tmp/functional-20220207192144-68683799824631/logs.txt: (1.301595125s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220207192144-6868 config get cpus: exit status 14 (73.976353ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 config set cpus 2
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 config get cpus
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220207192144-6868 config get cpus: exit status 14 (78.328122ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
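
The round trip above exercises the per-profile config store; note that getting an unset key exits with status 14, exactly as the two Non-zero exit lines show (profile name hypothetical):

    minikube -p func-demo config set cpus 2
    minikube -p func-demo config get cpus     # prints 2
    minikube -p func-demo config unset cpus
    minikube -p func-demo config get cpus     # exit status 14: key not found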

TestFunctional/parallel/DashboardCmd (3.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:906: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220207192144-6868 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:911: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220207192144-6868 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 46031: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.86s)

TestFunctional/parallel/DryRun (0.9s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220207192144-6868 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220207192144-6868 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (538.201843ms)

-- stdout --
	* [functional-20220207192144-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0207 19:23:42.453346   44901 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:23:42.453464   44901 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:23:42.453477   44901 out.go:310] Setting ErrFile to fd 2...
	I0207 19:23:42.453483   44901 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:23:42.453627   44901 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:23:42.453960   44901 out.go:304] Setting JSON to false
	I0207 19:23:42.455601   44901 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3979,"bootTime":1644257844,"procs":489,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:23:42.455700   44901 start.go:122] virtualization: kvm guest
	I0207 19:23:42.596274   44901 out.go:176] * [functional-20220207192144-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0207 19:23:42.604837   44901 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:23:42.606888   44901 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:23:42.708522   44901 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:23:42.742201   44901 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	I0207 19:23:42.743896   44901 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0207 19:23:42.744541   44901 config.go:176] Loaded profile config "functional-20220207192144-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:23:42.745109   44901 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:23:42.796051   44901 docker.go:132] docker version: linux-20.10.12
	I0207 19:23:42.796131   44901 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:23:42.913052   44901 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:49 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-02-07 19:23:42.835123204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:23:42.913170   44901 docker.go:237] overlay module found
	I0207 19:23:42.915816   44901 out.go:176] * Using the docker driver based on existing profile
	I0207 19:23:42.915859   44901 start.go:281] selected driver: docker
	I0207 19:23:42.915871   44901 start.go:798] validating driver "docker" against &{Name:functional-20220207192144-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:functional-20220207192144-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:23:42.916051   44901 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0207 19:23:42.916098   44901 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0207 19:23:42.916123   44901 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0207 19:23:42.917708   44901 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0207 19:23:42.919975   44901 out.go:176] 
	W0207 19:23:42.920099   44901 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0207 19:23:42.921489   44901 out.go:176] 
** /stderr **
functional_test.go:992: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220207192144-6868 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.90s)

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220207192144-6868 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1021: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220207192144-6868 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (244.812137ms)
-- stdout --
	* [functional-20220207192144-6868] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	
-- /stdout --
** stderr ** 
	I0207 19:23:30.420011   42254 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:23:30.420159   42254 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:23:30.420169   42254 out.go:310] Setting ErrFile to fd 2...
	I0207 19:23:30.420174   42254 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:23:30.420317   42254 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:23:30.420546   42254 out.go:304] Setting JSON to false
	I0207 19:23:30.421653   42254 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3967,"bootTime":1644257844,"procs":486,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0207 19:23:30.421735   42254 start.go:122] virtualization: kvm guest
	I0207 19:23:30.425294   42254 out.go:176] * [functional-20220207192144-6868] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64)
	I0207 19:23:30.426913   42254 out.go:176]   - MINIKUBE_LOCATION=13439
	I0207 19:23:30.428413   42254 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0207 19:23:30.429797   42254 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	I0207 19:23:30.431390   42254 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	I0207 19:23:30.432630   42254 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0207 19:23:30.433064   42254 config.go:176] Loaded profile config "functional-20220207192144-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:23:30.433460   42254 driver.go:344] Setting default libvirt URI to qemu:///system
	I0207 19:23:30.485801   42254 docker.go:132] docker version: linux-20.10.12
	I0207 19:23:30.485894   42254 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:23:30.588883   42254 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:49 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:41 SystemTime:2022-02-07 19:23:30.514675692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:23:30.588998   42254 docker.go:237] overlay module found
	I0207 19:23:30.592661   42254 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0207 19:23:30.592687   42254 start.go:281] selected driver: docker
	I0207 19:23:30.592700   42254 start.go:798] validating driver "docker" against &{Name:functional-20220207192144-6868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1643823806-13302@sha256:9cde8d533c45fa1d8936c4f658fe1f9983662f8d5e3e839a8ae15cbe69f5b4a8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:functional-20220207192144-6868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0207 19:23:30.592813   42254 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0207 19:23:30.592845   42254 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0207 19:23:30.592863   42254 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0207 19:23:30.594464   42254 out.go:176]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0207 19:23:30.596330   42254 out.go:176] 
	W0207 19:23:30.596455   42254 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0207 19:23:30.597922   42254 out.go:176] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.49s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:861: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:873: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.49s)

TestFunctional/parallel/ServiceCmd (15.91s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-20220207192144-6868 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-20220207192144-6868 expose deployment hello-node --type=NodePort --port=8080
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-drvdd" [fb4f6f2e-f4c3-4fac-9c6b-c5c3dc4664a0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-drvdd" [fb4f6f2e-f4c3-4fac-9c6b-c5c3dc4664a0] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 13.018188293s
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 service list: (1.03363557s)
functional_test.go:1468: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 service --namespace=default --https --url hello-node
functional_test.go:1477: found endpoint: https://192.168.49.2:32739
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 service hello-node --url --format={{.IP}}
functional_test.go:1497: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 service hello-node --url
functional_test.go:1503: found endpoint for hello-node: http://192.168.49.2:32739
functional_test.go:1514: Attempting to fetch http://192.168.49.2:32739 ...
functional_test.go:1534: http://192.168.49.2:32739: success! body:

Hostname: hello-node-54fbb85-drvdd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32739
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (15.91s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 addons list
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1561: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (38.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [f1ce289e-f308-4195-8f33-ae4e2ec4e9a9] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00691261s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220207192144-6868 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220207192144-6868 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220207192144-6868 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220207192144-6868 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [72449392-207f-4b8d-8778-22b5d6076125] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [72449392-207f-4b8d-8778-22b5d6076125] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [72449392-207f-4b8d-8778-22b5d6076125] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.007253962s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220207192144-6868 exec sp-pod -- touch /tmp/mount/foo
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220207192144-6868 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220207192144-6868 apply -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [3a229e4f-667e-41d4-930c-a50b0973f1f7] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [3a229e4f-667e-41d4-930c-a50b0973f1f7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [3a229e4f-667e-41d4-930c-a50b0973f1f7] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.006260772s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220207192144-6868 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.95s)

TestFunctional/parallel/SSHCmd (0.88s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1584: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1601: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.88s)

TestFunctional/parallel/CpCmd (1.62s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh -n functional-20220207192144-6868 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 cp functional-20220207192144-6868:/home/docker/cp-test.txt /tmp/mk_test926915592/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh -n functional-20220207192144-6868 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)

TestFunctional/parallel/MySQL (25.95s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1653: (dbg) Run:  kubectl --context functional-20220207192144-6868 replace --force -f testdata/mysql.yaml
functional_test.go:1659: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-b87c45988-7rh29" [aa2ec914-30bd-42dd-86e9-a6be1c8fd732] Pending
helpers_test.go:343: "mysql-b87c45988-7rh29" [aa2ec914-30bd-42dd-86e9-a6be1c8fd732] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-7rh29" [aa2ec914-30bd-42dd-86e9-a6be1c8fd732] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1659: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.006859071s
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207192144-6868 exec mysql-b87c45988-7rh29 -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220207192144-6868 exec mysql-b87c45988-7rh29 -- mysql -ppassword -e "show databases;": exit status 1 (231.828578ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207192144-6868 exec mysql-b87c45988-7rh29 -- mysql -ppassword -e "show databases;"
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220207192144-6868 exec mysql-b87c45988-7rh29 -- mysql -ppassword -e "show databases;": exit status 1 (159.747804ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207192144-6868 exec mysql-b87c45988-7rh29 -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Non-zero exit: kubectl --context functional-20220207192144-6868 exec mysql-b87c45988-7rh29 -- mysql -ppassword -e "show databases;": exit status 1 (182.901333ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1667: (dbg) Run:  kubectl --context functional-20220207192144-6868 exec mysql-b87c45988-7rh29 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.95s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1789: Checking for existence of /etc/test/nested/copy/6868/hosts within VM
functional_test.go:1791: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo cat /etc/test/nested/copy/6868/hosts"
functional_test.go:1796: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1832: Checking for existence of /etc/ssl/certs/6868.pem within VM
functional_test.go:1833: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo cat /etc/ssl/certs/6868.pem"
functional_test.go:1832: Checking for existence of /usr/share/ca-certificates/6868.pem within VM
functional_test.go:1833: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo cat /usr/share/ca-certificates/6868.pem"
functional_test.go:1832: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1833: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1859: Checking for existence of /etc/ssl/certs/68682.pem within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo cat /etc/ssl/certs/68682.pem"
functional_test.go:1859: Checking for existence of /usr/share/ca-certificates/68682.pem within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo cat /usr/share/ca-certificates/68682.pem"
functional_test.go:1859: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-20220207192144-6868 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1887: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo systemctl is-active crio"
functional_test.go:1887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo systemctl is-active crio": exit status 1 (374.508528ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1280: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2133: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2133: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 version -o=json --components: (1.489196783s)
--- PASS: TestFunctional/parallel/Version/components (1.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220207192144-6868 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220207192144-6868 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [b68f91ea-5841-4523-8da1-66f6847b553b] Pending
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [b68f91ea-5841-4523-8da1-66f6847b553b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [b68f91ea-5841-4523-8da1-66f6847b553b] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.015152493s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "453.806376ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1334: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1339: Took "77.084485ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1371: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: Took "407.349777ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1384: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1389: Took "62.992624ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1979: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1979: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1979: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220207192144-6868 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://10.101.200.172 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220207192144-6868 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220207192144-6868
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220207192144-6868
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls --format table

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                         | 7801cfc6d5c07 | 34.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220207192144-6868 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | alpine                         | bef258acf10dc | 23.4MB |
| docker.io/library/nginx                     | latest                         | c316d5a335a5c | 142MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.3                        | b07520cd7ab76 | 125MB  |
| docker.io/kubernetesui/dashboard            | v2.3.1                         | e1482a24335a6 | 220MB  |
| docker.io/library/mysql                     | 5.7                            | 0712d5dc1b147 | 448MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.3                        | f40be0088a83e | 135MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.3                        | 99a3486be4f28 | 53.5MB |
| k8s.gcr.io/kube-proxy                       | v1.23.3                        | 9b7cc99821098 | 112MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-20220207192144-6868 | 90cbccdf6f1e7 | 30B    |
|---------------------------------------------|--------------------------------|---------------|--------|
2022/02/07 19:23:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls --format json:
[{"id":"0712d5dc1b147bdda13b0a45d1b12ef5520539d28c2850ae450960bfdcdd20c7","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"448000000"},{"id":"99a3486be4f2837c939313935007928f97b81a1cf11495808d81ad6b14c04078","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.3"],"size":"53500000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigest
s":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220207192144-6868"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"b07520cd7ab76ec98ea6c07ae56d21d65f29708c24f90a55a3c30d823419577e","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.3"],"size":"125000000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"
],"size":"220000000"},{"id":"9b7cc9982109819e8fe5b0b6c0d3122790f88275e13b02f79e7e9e307466aa1b","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.3"],"size":"112000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"90cbccdf6f1e78f7c57fad87556ab58df0daf196e88e7a51f2f8b19b7a3d2bd7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220207192144-6868"],"size":"30"},{"id":"c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"f40be0088a83e79642d0a2a1bbc55e61b9289167385e67701b82ea85fc9bbfc4","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.3"],"size":"135000000"},{"id":"bef258acf10dc257d641c4
7c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
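The `image ls --format json` output above is a flat JSON array of image records. A minimal Go sketch that decodes it (field names are taken from the output above; the struct name and calling the plain `minikube` binary rather than out/minikube-linux-amd64 are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // listedImage mirrors the fields visible in the JSON above;
    // the struct name itself is illustrative.
    type listedImage struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // bytes, serialized as a string
    }

    func main() {
        out, err := exec.Command("minikube", "-p", "functional-20220207192144-6868",
            "image", "ls", "--format", "json").Output()
        if err != nil {
            panic(err)
        }
        var images []listedImage
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            // IDs in this run are 64 hex chars, so a short prefix is safe to print
            fmt.Println(img.ID[:12], img.RepoTags, img.Size)
        }
    }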

TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls --format yaml:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: f40be0088a83e79642d0a2a1bbc55e61b9289167385e67701b82ea85fc9bbfc4
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.3
size: "135000000"
- id: bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 99a3486be4f2837c939313935007928f97b81a1cf11495808d81ad6b14c04078
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.3
size: "53500000"
- id: 9b7cc9982109819e8fe5b0b6c0d3122790f88275e13b02f79e7e9e307466aa1b
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.3
size: "112000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220207192144-6868
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 90cbccdf6f1e78f7c57fad87556ab58df0daf196e88e7a51f2f8b19b7a3d2bd7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220207192144-6868
size: "30"
- id: 0712d5dc1b147bdda13b0a45d1b12ef5520539d28c2850ae450960bfdcdd20c7
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "448000000"
- id: c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: b07520cd7ab76ec98ea6c07ae56d21d65f29708c24f90a55a3c30d823419577e
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.3
size: "125000000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh pgrep buildkitd: exit status 1 (388.961626ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image build -t localhost/my-image:functional-20220207192144-6868 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 image build -t localhost/my-image:functional-20220207192144-6868 testdata/build: (2.443232612s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220207192144-6868 image build -t localhost/my-image:functional-20220207192144-6868 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 2bd0f34b1aec
Removing intermediate container 2bd0f34b1aec
---> 75c5a981acc5
Step 3/3 : ADD content.txt /
---> cc0e1b2234ae
Successfully built cc0e1b2234ae
Successfully tagged localhost/my-image:functional-20220207192144-6868
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.08s)
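The three build steps logged above imply a testdata/build Dockerfile along these lines (reconstructed from the Step 1/3 - 3/3 output; not the verbatim file):

    # Reconstruction from the logged build steps, not the checked-in file
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /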

TestFunctional/parallel/ImageCommands/Setup (2.01s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.969105247s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220207192144-6868
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.01s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207192144-6868

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207192144-6868: (6.297560515s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.68s)

TestFunctional/parallel/DockerEnv/bash (1.41s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220207192144-6868 docker-env) && out/minikube-linux-amd64 status -p functional-20220207192144-6868"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220207192144-6868 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.41s)

TestFunctional/parallel/MountCmd/any-port (9.24s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220207192144-6868 /tmp/mounttest1019347213:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1644261810600520256" to /tmp/mounttest1019347213/created-by-test
functional_test_mount_test.go:110: wrote "test-1644261810600520256" to /tmp/mounttest1019347213/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1644261810600520256" to /tmp/mounttest1019347213/test-1644261810600520256
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (642.150293ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  7 19:23 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  7 19:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  7 19:23 test-1644261810600520256
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh cat /mount-9p/test-1644261810600520256
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220207192144-6868 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [757311ab-36f6-44ea-a453-0d04c90ce7a9] Pending
helpers_test.go:343: "busybox-mount" [757311ab-36f6-44ea-a453-0d04c90ce7a9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [757311ab-36f6-44ea-a453-0d04c90ce7a9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00746973s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220207192144-6868 logs busybox-mount

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220207192144-6868 /tmp/mounttest1019347213:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.24s)
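Note that the first `findmnt -T /mount-9p | grep 9p` probe above exits 1 and is simply retried: the 9p mount takes a moment to appear after the mount daemon starts. A minimal Go sketch of that poll-until-mounted pattern (profile name is from this run; the 30-second deadline is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Poll until the 9p mount shows up inside the guest, mirroring the
    // retried `ssh "findmnt -T /mount-9p | grep 9p"` calls in the log above.
    func main() {
        deadline := time.Now().Add(30 * time.Second) // assumed timeout
        for time.Now().Before(deadline) {
            cmd := exec.Command("minikube", "-p", "functional-20220207192144-6868",
                "ssh", "findmnt -T /mount-9p | grep 9p")
            if err := cmd.Run(); err == nil {
                fmt.Println("/mount-9p is mounted")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("mount never appeared")
    }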

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207192144-6868

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207192144-6868: (3.366689494s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.61s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220207192144-6868
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207192144-6868

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220207192144-6868: (3.279541137s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.00s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image save gcr.io/google-containers/addon-resizer:functional-20220207192144-6868 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 image save gcr.io/google-containers/addon-resizer:functional-20220207192144-6868 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.056337055s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)

TestFunctional/parallel/MountCmd/specific-port (2.55s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220207192144-6868 /tmp/mounttest1430386074:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.174534ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220207192144-6868 /tmp/mounttest1430386074:/mount-9p --alsologtostderr -v=1 --port 46464] ...

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh "sudo umount -f /mount-9p": exit status 1 (439.946352ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20220207192144-6868 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220207192144-6868 /tmp/mounttest1430386074:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image rm gcr.io/google-containers/addon-resizer:functional-20220207192144-6868

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.41128896s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220207192144-6868

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220207192144-6868 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220207192144-6868

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-20220207192144-6868 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220207192144-6868: (2.517437445s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220207192144-6868
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.60s)

TestFunctional/delete_addon-resizer_images (0.1s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220207192144-6868
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220207192144-6868
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220207192144-6868
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (61.76s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220207192354-6868 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0207 19:24:16.238143    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:16.243928    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:16.254296    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:16.274579    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:16.315145    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:16.395864    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:16.556648    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:16.877421    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:17.518026    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:18.799123    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:21.360148    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:26.480500    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:24:36.721656    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220207192354-6868 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m1.757524082s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (61.76s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.27s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220207192354-6868 addons enable ingress --alsologtostderr -v=5
E0207 19:24:57.202442    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220207192354-6868 addons enable ingress --alsologtostderr -v=5: (16.273746406s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.27s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220207192354-6868 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (31.09s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220207192354-6868 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220207192354-6868 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.592624929s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220207192354-6868 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220207192354-6868 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [89721b1c-7eca-489d-98f0-6db143d820fa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [89721b1c-7eca-489d-98f0-6db143d820fa] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.006480425s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220207192354-6868 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20220207192354-6868 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220207192354-6868 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220207192354-6868 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220207192354-6868 addons disable ingress-dns --alsologtostderr -v=1: (1.854029295s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220207192354-6868 addons disable ingress --alsologtostderr -v=1
E0207 19:25:38.163555    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220207192354-6868 addons disable ingress --alsologtostderr -v=1: (7.285991777s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (31.09s)
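The `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` step is the core of the validation: the ingress controller routes on the Host header rather than the request URL. The same check expressed in Go, aimed at the cluster IP reported by `minikube ip` (192.168.49.2 in this run; the sketch is illustrative, not the test's code):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Hit the ingress controller by IP but route via the Host header,
    // mirroring the curl invocation in the test above.
    func main() {
        req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
        if err != nil {
            panic(err)
        }
        req.Host = "nginx.example.com" // the ingress rule matches on this host
        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }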

TestJSONOutput/start/Command (43.11s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220207192546-6868 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220207192546-6868 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (43.109973377s)
--- PASS: TestJSONOutput/start/Command (43.11s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220207192546-6868 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220207192546-6868 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (11.1s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220207192546-6868 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220207192546-6868 --output=json --user=testUser: (11.095589455s)
--- PASS: TestJSONOutput/stop/Command (11.10s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.3s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220207192644-6868 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220207192644-6868 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.666442ms)

-- stdout --
	{"specversion":"1.0","id":"f55f5c7f-e15c-4b47-9146-10b103117536","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220207192644-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9bf48a39-01c7-452a-a3e6-571a674f3461","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13439"}}
	{"specversion":"1.0","id":"7955fed3-23ba-43db-9c77-6f41c26a1e07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8d3df68f-5ae2-4521-ac9d-4e95063e3314","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig"}}
	{"specversion":"1.0","id":"39dc26fe-1019-4d68-bed2-ea0977b11527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube"}}
	{"specversion":"1.0","id":"1707c5e7-bf52-4e63-a4bf-c5a4045be8f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"47fad6da-fbbc-4040-8f77-a184cf8a074d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220207192644-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220207192644-6868
--- PASS: TestErrorJSONOutput (0.30s)
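Each line that `--output=json` emits is a self-contained CloudEvents-style JSON object, as the stdout above shows. A minimal Go sketch that streams and classifies such events (field names are taken from the events above; reading the stream from stdin is an assumption about delivery):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors the CloudEvents-style lines in the stdout above.
    type event struct {
        Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.error
        Data map[string]string `json:"data"` // all data values above are strings
    }

    // Feed minikube's --output=json stream on stdin, one JSON object per line.
    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip any interleaved non-JSON lines
            }
            switch ev.Type {
            case "io.k8s.sigs.minikube.error":
                fmt.Println("error:", ev.Data["message"], "exitcode:", ev.Data["exitcode"])
            case "io.k8s.sigs.minikube.step":
                fmt.Printf("step %s/%s: %s\n",
                    ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
            default:
                fmt.Println("info:", ev.Data["message"])
            }
        }
    }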

TestKicCustomNetwork/create_custom_network (30.23s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220207192644-6868 --network=
E0207 19:27:00.084681    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220207192644-6868 --network=: (27.84557066s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220207192644-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220207192644-6868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220207192644-6868: (2.346282046s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.23s)

TestKicCustomNetwork/use_default_bridge_network (28.8s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220207192714-6868 --network=bridge
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220207192714-6868 --network=bridge: (26.528004348s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220207192714-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220207192714-6868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220207192714-6868: (2.237236558s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.80s)

TestKicExistingNetwork (30.02s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220207192743-6868 --network=existing-network
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220207192743-6868 --network=existing-network: (27.421719932s)
helpers_test.go:176: Cleaning up "existing-network-20220207192743-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220207192743-6868
E0207 19:28:11.853186    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:28:11.858529    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:28:11.868823    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:28:11.889158    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:28:11.929444    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:28:12.009771    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:28:12.170214    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:28:12.490813    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:28:13.131539    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220207192743-6868: (2.368560289s)
--- PASS: TestKicExistingNetwork (30.02s)
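
Note: unlike the custom-network cases above, TestKicExistingNetwork starts minikube against a Docker network that already exists. A hand-run equivalent, assuming "existing-network" is free to use (all names are placeholders):

    docker network create existing-network
    out/minikube-linux-amd64 start -p existing-net-demo --network=existing-network
    out/minikube-linux-amd64 delete -p existing-net-demo
    docker network rm existing-network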

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (6.12s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220207192813-6868 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0207 19:28:14.412052    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:28:16.973183    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220207192813-6868 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.1197424s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.12s)
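
Note: the --mount* flags above export a host directory into the machine as a 9p share (--mount-msize is a 9p parameter). A minimal sketch with a placeholder profile name; per the VerifyMount steps below, the share surfaces inside the guest at /minikube-host:

    out/minikube-linux-amd64 start -p mount-demo --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host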

TestMountStart/serial/VerifyMountFirst (0.34s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220207192813-6868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.34s)

TestMountStart/serial/StartWithMountSecond (5.67s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220207192813-6868 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0207 19:28:22.094080    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220207192813-6868 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.6670012s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.67s)

TestMountStart/serial/VerifyMountSecond (0.34s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220207192813-6868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

TestMountStart/serial/DeleteFirst (1.79s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220207192813-6868 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220207192813-6868 --alsologtostderr -v=5: (1.785731619s)
--- PASS: TestMountStart/serial/DeleteFirst (1.79s)

TestMountStart/serial/VerifyMountPostDelete (0.34s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220207192813-6868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.34s)

TestMountStart/serial/Stop (1.29s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220207192813-6868
mount_start_test.go:156: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220207192813-6868: (1.288483342s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (6.8s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220207192813-6868
E0207 19:28:32.334641    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
mount_start_test.go:167: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220207192813-6868: (5.803676362s)
--- PASS: TestMountStart/serial/RestartStopped (6.80s)
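
Note: Stop/RestartStopped above suggest that restarting a stopped profile with a bare "start" reuses the configuration it was created with, mount options included, which is why VerifyMountPostStop still passes afterwards. A hand-run sketch (placeholder profile name):

    out/minikube-linux-amd64 stop -p mount-demo
    out/minikube-linux-amd64 start -p mount-demo       # no flags needed on restart
    out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host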

TestMountStart/serial/VerifyMountPostStop (0.34s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220207192813-6868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.34s)

TestMultiNode/serial/FreshStart2Nodes (75.52s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220207192838-6868 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0207 19:28:52.815757    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:29:16.238177    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
E0207 19:29:33.776754    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:29:43.925140    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220207192838-6868 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m14.91402856s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.52s)
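
Note: the two-node cluster exercised above can be reproduced by hand roughly as follows (profile name is a placeholder); --wait=true blocks until all components report healthy:

    out/minikube-linux-amd64 start -p multinode-demo --nodes=2 --memory=2200 \
      --wait=true --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p multinode-demo status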

TestMultiNode/serial/DeployApp2Nodes (4.67s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:491: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- rollout status deployment/busybox: (2.983062017s)
multinode_test.go:497: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-2pzkl -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-8k7bp -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-2pzkl -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-8k7bp -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-2pzkl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-8k7bp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.67s)
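
Note: "minikube kubectl -p <profile> --" above proxies a kubectl matching the cluster version against that profile's kubeconfig. A condensed sketch of the deploy-and-resolve flow (the manifest path and pod name are placeholders; the test's manifest lives in the repo's testdata):

    out/minikube-linux-amd64 kubectl -p multinode-demo -- apply -f multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-demo -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p multinode-demo -- exec <pod> -- nslookup kubernetes.default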

TestMultiNode/serial/PingHostFrom2Pods (0.89s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-2pzkl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-2pzkl -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-8k7bp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220207192838-6868 -- exec busybox-7978565885-8k7bp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
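
Note: the pods reach the host through host.minikube.internal, a name minikube injects so guests can resolve the host-side gateway of the cluster network (192.168.49.1 on the default KIC subnet, as pinged above). A sketch (pod name is a placeholder):

    out/minikube-linux-amd64 kubectl -p multinode-demo -- exec <pod> -- \
      sh -c "nslookup host.minikube.internal"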

TestMultiNode/serial/AddNode (28.92s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220207192838-6868 -v 3 --alsologtostderr
E0207 19:30:12.697255    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:12.702567    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:12.712840    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:12.733141    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:12.774087    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:12.854385    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:13.015036    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:13.336178    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:13.976882    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:15.257798    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:17.818596    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:22.938837    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220207192838-6868 -v 3 --alsologtostderr: (28.10801334s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.92s)
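
Note: nodes can be added to a running profile without the debugging flags the test passes; a minimal sketch with a placeholder profile:

    out/minikube-linux-amd64 node add -p multinode-demo
    out/minikube-linux-amd64 -p multinode-demo status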

TestMultiNode/serial/ProfileList (0.39s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

TestMultiNode/serial/CopyFile (12.51s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status --output json --alsologtostderr
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp testdata/cp-test.txt multinode-20220207192838-6868:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp multinode-20220207192838-6868:/home/docker/cp-test.txt /tmp/mk_cp_test202435189/cp-test_multinode-20220207192838-6868.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp multinode-20220207192838-6868:/home/docker/cp-test.txt multinode-20220207192838-6868-m02:/home/docker/cp-test_multinode-20220207192838-6868_multinode-20220207192838-6868-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m02 "sudo cat /home/docker/cp-test_multinode-20220207192838-6868_multinode-20220207192838-6868-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp multinode-20220207192838-6868:/home/docker/cp-test.txt multinode-20220207192838-6868-m03:/home/docker/cp-test_multinode-20220207192838-6868_multinode-20220207192838-6868-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868 "sudo cat /home/docker/cp-test.txt"
E0207 19:30:33.179550    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m03 "sudo cat /home/docker/cp-test_multinode-20220207192838-6868_multinode-20220207192838-6868-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp testdata/cp-test.txt multinode-20220207192838-6868-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp multinode-20220207192838-6868-m02:/home/docker/cp-test.txt /tmp/mk_cp_test202435189/cp-test_multinode-20220207192838-6868-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp multinode-20220207192838-6868-m02:/home/docker/cp-test.txt multinode-20220207192838-6868:/home/docker/cp-test_multinode-20220207192838-6868-m02_multinode-20220207192838-6868.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868 "sudo cat /home/docker/cp-test_multinode-20220207192838-6868-m02_multinode-20220207192838-6868.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp multinode-20220207192838-6868-m02:/home/docker/cp-test.txt multinode-20220207192838-6868-m03:/home/docker/cp-test_multinode-20220207192838-6868-m02_multinode-20220207192838-6868-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m03 "sudo cat /home/docker/cp-test_multinode-20220207192838-6868-m02_multinode-20220207192838-6868-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp testdata/cp-test.txt multinode-20220207192838-6868-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp multinode-20220207192838-6868-m03:/home/docker/cp-test.txt /tmp/mk_cp_test202435189/cp-test_multinode-20220207192838-6868-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp multinode-20220207192838-6868-m03:/home/docker/cp-test.txt multinode-20220207192838-6868:/home/docker/cp-test_multinode-20220207192838-6868-m03_multinode-20220207192838-6868.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868 "sudo cat /home/docker/cp-test_multinode-20220207192838-6868-m03_multinode-20220207192838-6868.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 cp multinode-20220207192838-6868-m03:/home/docker/cp-test.txt multinode-20220207192838-6868-m02:/home/docker/cp-test_multinode-20220207192838-6868-m03_multinode-20220207192838-6868-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 ssh -n multinode-20220207192838-6868-m02 "sudo cat /home/docker/cp-test_multinode-20220207192838-6868-m03_multinode-20220207192838-6868-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.51s)
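
Note: "minikube cp" above copies both host-to-node and node-to-node, addressing nodes by machine name (<profile>, <profile>-m02, ...). A condensed sketch with placeholder names:

    out/minikube-linux-amd64 -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
      multinode-demo-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"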

TestMultiNode/serial/StopNode (2.62s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220207192838-6868 node stop m03: (1.304692288s)
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220207192838-6868 status: exit status 7 (653.828162ms)
-- stdout --
	multinode-20220207192838-6868
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220207192838-6868-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220207192838-6868-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220207192838-6868 status --alsologtostderr: exit status 7 (656.364248ms)
-- stdout --
	multinode-20220207192838-6868
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220207192838-6868-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220207192838-6868-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0207 19:30:43.408608   92860 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:30:43.408688   92860 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:30:43.408692   92860 out.go:310] Setting ErrFile to fd 2...
	I0207 19:30:43.408695   92860 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:30:43.408817   92860 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:30:43.409017   92860 out.go:304] Setting JSON to false
	I0207 19:30:43.409034   92860 mustload.go:65] Loading cluster: multinode-20220207192838-6868
	I0207 19:30:43.409380   92860 config.go:176] Loaded profile config "multinode-20220207192838-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:30:43.409396   92860 status.go:253] checking status of multinode-20220207192838-6868 ...
	I0207 19:30:43.409784   92860 cli_runner.go:133] Run: docker container inspect multinode-20220207192838-6868 --format={{.State.Status}}
	I0207 19:30:43.444939   92860 status.go:328] multinode-20220207192838-6868 host status = "Running" (err=<nil>)
	I0207 19:30:43.444977   92860 host.go:66] Checking if "multinode-20220207192838-6868" exists ...
	I0207 19:30:43.445265   92860 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220207192838-6868
	I0207 19:30:43.481281   92860 host.go:66] Checking if "multinode-20220207192838-6868" exists ...
	I0207 19:30:43.481555   92860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:30:43.481610   92860 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220207192838-6868
	I0207 19:30:43.517547   92860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49212 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/multinode-20220207192838-6868/id_rsa Username:docker}
	I0207 19:30:43.607198   92860 ssh_runner.go:195] Run: systemctl --version
	I0207 19:30:43.611301   92860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:30:43.620950   92860 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0207 19:30:43.719917   92860 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:48 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-02-07 19:30:43.652577026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663643648 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0207 19:30:43.720838   92860 kubeconfig.go:92] found "multinode-20220207192838-6868" server: "https://192.168.49.2:8443"
	I0207 19:30:43.720867   92860 api_server.go:165] Checking apiserver status ...
	I0207 19:30:43.720900   92860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0207 19:30:43.742776   92860 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1719/cgroup
	I0207 19:30:43.750438   92860 api_server.go:181] apiserver freezer: "8:freezer:/docker/256410da11e56b1803538c4f8aa030defb33a06f591044388fa018dffe5bb977/kubepods/burstable/pod4a231f759bd44bd53cda955980b29e04/6c6db29f9095293dee270b7bc3dd51261e1cbe412fb143dafea6c29343e377b8"
	I0207 19:30:43.750516   92860 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/256410da11e56b1803538c4f8aa030defb33a06f591044388fa018dffe5bb977/kubepods/burstable/pod4a231f759bd44bd53cda955980b29e04/6c6db29f9095293dee270b7bc3dd51261e1cbe412fb143dafea6c29343e377b8/freezer.state
	I0207 19:30:43.757627   92860 api_server.go:203] freezer state: "THAWED"
	I0207 19:30:43.757656   92860 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0207 19:30:43.762530   92860 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0207 19:30:43.762558   92860 status.go:419] multinode-20220207192838-6868 apiserver status = Running (err=<nil>)
	I0207 19:30:43.762568   92860 status.go:255] multinode-20220207192838-6868 status: &{Name:multinode-20220207192838-6868 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0207 19:30:43.762589   92860 status.go:253] checking status of multinode-20220207192838-6868-m02 ...
	I0207 19:30:43.762853   92860 cli_runner.go:133] Run: docker container inspect multinode-20220207192838-6868-m02 --format={{.State.Status}}
	I0207 19:30:43.798289   92860 status.go:328] multinode-20220207192838-6868-m02 host status = "Running" (err=<nil>)
	I0207 19:30:43.798359   92860 host.go:66] Checking if "multinode-20220207192838-6868-m02" exists ...
	I0207 19:30:43.798628   92860 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220207192838-6868-m02
	I0207 19:30:43.833218   92860 host.go:66] Checking if "multinode-20220207192838-6868-m02" exists ...
	I0207 19:30:43.833565   92860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0207 19:30:43.833619   92860 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220207192838-6868-m02
	I0207 19:30:43.867401   92860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/machines/multinode-20220207192838-6868-m02/id_rsa Username:docker}
	I0207 19:30:43.955116   92860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0207 19:30:43.964862   92860 status.go:255] multinode-20220207192838-6868-m02 status: &{Name:multinode-20220207192838-6868-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0207 19:30:43.964943   92860 status.go:253] checking status of multinode-20220207192838-6868-m03 ...
	I0207 19:30:43.965187   92860 cli_runner.go:133] Run: docker container inspect multinode-20220207192838-6868-m03 --format={{.State.Status}}
	I0207 19:30:43.999927   92860 status.go:328] multinode-20220207192838-6868-m03 host status = "Stopped" (err=<nil>)
	I0207 19:30:43.999950   92860 status.go:341] host is not running, skipping remaining checks
	I0207 19:30:43.999956   92860 status.go:255] multinode-20220207192838-6868-m03 status: &{Name:multinode-20220207192838-6868-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.62s)
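
Note: as the exit status 7 above shows, "minikube status" signals a partially stopped cluster through its exit code, not just its text, so scripts should check both. A sketch (placeholder profile):

    out/minikube-linux-amd64 -p multinode-demo node stop m03
    out/minikube-linux-amd64 -p multinode-demo status || echo "status exit code: $?"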

TestMultiNode/serial/StartAfterStop (25.32s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 node start m03 --alsologtostderr
E0207 19:30:53.660370    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:30:55.698793    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220207192838-6868 node start m03 --alsologtostderr: (24.381608815s)
multinode_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.32s)

TestMultiNode/serial/RestartKeepsNodes (102.14s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220207192838-6868
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220207192838-6868
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220207192838-6868: (22.848095509s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220207192838-6868 --wait=true -v=8 --alsologtostderr
E0207 19:31:34.621179    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220207192838-6868 --wait=true -v=8 --alsologtostderr: (1m19.149955056s)
multinode_test.go:305: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220207192838-6868
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.14s)
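
Note: the stop/start cycle above is expected to preserve the node set recorded in the profile's config. A hand-run sketch (placeholder profile):

    out/minikube-linux-amd64 stop -p multinode-demo
    out/minikube-linux-amd64 start -p multinode-demo --wait=true
    out/minikube-linux-amd64 node list -p multinode-demo    # same nodes as before the stop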

TestMultiNode/serial/DeleteNode (5.64s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220207192838-6868 node delete m03: (4.849292862s)
multinode_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status --alsologtostderr
E0207 19:32:56.541600    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.64s)

TestMultiNode/serial/StopMultiNode (21.99s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 stop
E0207 19:33:11.855660    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
multinode_test.go:319: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220207192838-6868 stop: (21.713861234s)
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220207192838-6868 status: exit status 7 (142.599251ms)
-- stdout --
	multinode-20220207192838-6868
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220207192838-6868-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220207192838-6868 status --alsologtostderr: exit status 7 (137.538915ms)
-- stdout --
	multinode-20220207192838-6868
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220207192838-6868-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0207 19:33:19.018923  106629 out.go:297] Setting OutFile to fd 1 ...
	I0207 19:33:19.019011  106629 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:33:19.019022  106629 out.go:310] Setting ErrFile to fd 2...
	I0207 19:33:19.019030  106629 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0207 19:33:19.019151  106629 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/bin
	I0207 19:33:19.019334  106629 out.go:304] Setting JSON to false
	I0207 19:33:19.019352  106629 mustload.go:65] Loading cluster: multinode-20220207192838-6868
	I0207 19:33:19.019753  106629 config.go:176] Loaded profile config "multinode-20220207192838-6868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
	I0207 19:33:19.019771  106629 status.go:253] checking status of multinode-20220207192838-6868 ...
	I0207 19:33:19.020169  106629 cli_runner.go:133] Run: docker container inspect multinode-20220207192838-6868 --format={{.State.Status}}
	I0207 19:33:19.055407  106629 status.go:328] multinode-20220207192838-6868 host status = "Stopped" (err=<nil>)
	I0207 19:33:19.055434  106629 status.go:341] host is not running, skipping remaining checks
	I0207 19:33:19.055441  106629 status.go:255] multinode-20220207192838-6868 status: &{Name:multinode-20220207192838-6868 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0207 19:33:19.055471  106629 status.go:253] checking status of multinode-20220207192838-6868-m02 ...
	I0207 19:33:19.055771  106629 cli_runner.go:133] Run: docker container inspect multinode-20220207192838-6868-m02 --format={{.State.Status}}
	I0207 19:33:19.091067  106629 status.go:328] multinode-20220207192838-6868-m02 host status = "Stopped" (err=<nil>)
	I0207 19:33:19.091092  106629 status.go:341] host is not running, skipping remaining checks
	I0207 19:33:19.091098  106629 status.go:255] multinode-20220207192838-6868-m02 status: &{Name:multinode-20220207192838-6868-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.99s)

TestMultiNode/serial/RestartMultiNode (85.2s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220207192838-6868 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0207 19:33:39.539045    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
E0207 19:34:16.238775    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220207192838-6868 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.403327012s)
multinode_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220207192838-6868 status --alsologtostderr
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (85.20s)

TestMultiNode/serial/ValidateNameConflict (30.55s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220207192838-6868
multinode_test.go:457: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220207192838-6868-m02 --driver=docker  --container-runtime=docker
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220207192838-6868-m02 --driver=docker  --container-runtime=docker: exit status 14 (79.357013ms)
-- stdout --
	* [multinode-20220207192838-6868-m02] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220207192838-6868-m02' is duplicated with machine name 'multinode-20220207192838-6868-m02' in profile 'multinode-20220207192838-6868'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220207192838-6868-m03 --driver=docker  --container-runtime=docker
multinode_test.go:465: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220207192838-6868-m03 --driver=docker  --container-runtime=docker: (27.567689674s)
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220207192838-6868
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220207192838-6868: exit status 80 (370.633804ms)
-- stdout --
	* Adding node m03 to cluster multinode-20220207192838-6868
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220207192838-6868-m03 already exists in multinode-20220207192838-6868-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220207192838-6868-m03
E0207 19:35:12.696811    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
multinode_test.go:477: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220207192838-6868-m03: (2.464203488s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.55s)
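
Note: the MK_USAGE failure (exit 14) above enforces that a new profile name may not collide with a machine name belonging to an existing multi-node profile. An illustrative repro (names are placeholders):

    # fails if a profile "multinode-demo" with node m02 already exists
    out/minikube-linux-amd64 start -p multinode-demo-m02 --driver=docker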

TestPreload (116.56s)
=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220207193519-6868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
E0207 19:35:40.383183    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220207193519-6868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m22.985956151s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220207193519-6868 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220207193519-6868 -- docker pull gcr.io/k8s-minikube/busybox: (1.857812158s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220207193519-6868 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220207193519-6868 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (28.900896546s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220207193519-6868 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20220207193519-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220207193519-6868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220207193519-6868: (2.448092792s)
--- PASS: TestPreload (116.56s)
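
Note: --preload=false above disables minikube's preloaded image tarball so images are actually pulled, and the later start on v1.17.3 is expected to keep those cached images. A condensed sketch (placeholder profile):

    out/minikube-linux-amd64 start -p preload-demo --preload=false --kubernetes-version=v1.17.0 --driver=docker
    out/minikube-linux-amd64 ssh -p preload-demo -- docker pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 start -p preload-demo --kubernetes-version=v1.17.3 --driver=docker
    out/minikube-linux-amd64 ssh -p preload-demo -- docker images   # busybox should still be present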

TestScheduledStopUnix (99.73s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220207193716-6868 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220207193716-6868 --memory=2048 --driver=docker  --container-runtime=docker: (26.107311063s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220207193716-6868 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220207193716-6868 -n scheduled-stop-20220207193716-6868
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220207193716-6868 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220207193716-6868 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220207193716-6868 -n scheduled-stop-20220207193716-6868
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220207193716-6868
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220207193716-6868 --schedule 15s
E0207 19:38:11.854992    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220207193716-6868
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220207193716-6868: exit status 7 (90.939249ms)

-- stdout --
	scheduled-stop-20220207193716-6868
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220207193716-6868 -n scheduled-stop-20220207193716-6868
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220207193716-6868 -n scheduled-stop-20220207193716-6868: exit status 7 (93.652983ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220207193716-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220207193716-6868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220207193716-6868: (1.926341685s)
--- PASS: TestScheduledStopUnix (99.73s)
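
The scheduled-stop sequence above maps onto plain CLI calls; a minimal sketch, assuming a running docker-driver profile named sched-demo (illustrative):

    # Schedule a stop five minutes out, then inspect the countdown.
    minikube stop -p sched-demo --schedule 5m
    minikube status -p sched-demo --format={{.TimeToStop}}
    # Either cancel the pending stop, or schedule a short one and let it fire.
    minikube stop -p sched-demo --cancel-scheduled
    minikube stop -p sched-demo --schedule 15s
    # Once it fires, status reports Stopped and exits 7, as the test expects.
    minikube status -p sched-demo --format={{.Host}}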

TestSkaffold (72.59s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /tmp/skaffold.exe814255291 version
skaffold_test.go:61: skaffold version: v1.35.2
skaffold_test.go:64: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20220207193855-6868 --memory=2600 --driver=docker  --container-runtime=docker
E0207 19:39:16.238108    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
skaffold_test.go:64: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220207193855-6868 --memory=2600 --driver=docker  --container-runtime=docker: (27.163533264s)
skaffold_test.go:84: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:108: (dbg) Run:  /tmp/skaffold.exe814255291 run --minikube-profile skaffold-20220207193855-6868 --kube-context skaffold-20220207193855-6868 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Done: /tmp/skaffold.exe814255291 run --minikube-profile skaffold-20220207193855-6868 --kube-context skaffold-20220207193855-6868 --status-check=true --port-forward=false --interactive=false: (31.289867123s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-8675b964f6-x9hjr" [d78d0d6a-3e72-4059-aae7-378b9fe8e1be] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011174129s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-54d94c59fc-rtlpp" [5911af18-b06a-4562-8e46-4ed3fbcd99d0] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00631947s
helpers_test.go:176: Cleaning up "skaffold-20220207193855-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20220207193855-6868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220207193855-6868: (2.637538553s)
--- PASS: TestSkaffold (72.59s)
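
The skaffold flow above amounts to pointing skaffold at the minikube profile and its kube-context; a minimal sketch with an illustrative profile name:

    minikube start -p skaffold-demo --memory=2600 --driver=docker
    # Flags mirror the test invocation above.
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
        --status-check=true --port-forward=false --interactive=false
    # The test then waits for the deployed pods to become healthy.
    kubectl --context skaffold-demo get pods -l app=leeroy-app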

TestInsufficientStorage (15.26s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220207194008-6868 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0207 19:40:12.696554    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220207194008-6868 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (12.571199346s)

-- stdout --
	{"specversion":"1.0","id":"746c6526-222c-42ea-a87f-7f0c823db43a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220207194008-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"906e9b76-4312-444b-9ecd-aaba052deea9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13439"}}
	{"specversion":"1.0","id":"b138256d-d06a-4cbb-ba74-348c8186b5f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1495a402-28b7-41e8-a80d-21707814520d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig"}}
	{"specversion":"1.0","id":"dee96d83-b4b4-4f22-9b77-6dcbf3b1532e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube"}}
	{"specversion":"1.0","id":"b312cee0-ff04-40ac-93f8-610bd7c0c893","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b241bd1d-287c-4bdd-a2af-b0c9c6305b53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3dacd079-e43b-4ed2-8485-2d2eb3056827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6538f755-a7ac-43ac-8936-0a5f75b878d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"786652e7-f9c8-4678-a3a3-620400447083","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"a1a70fd0-1919-4091-b5e1-cb755efa257d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220207194008-6868 in cluster insufficient-storage-20220207194008-6868","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f83522c9-cc21-4353-bb33-5cabc6d0f12c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"10d55c90-7ba4-482e-b068-8f7ea430a313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ec62f83-930f-4e6e-b993-edf497ec9da0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220207194008-6868 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220207194008-6868 --output=json --layout=cluster: exit status 7 (358.921088ms)

-- stdout --
	{"Name":"insufficient-storage-20220207194008-6868","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220207194008-6868","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0207 19:40:21.334697  138901 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220207194008-6868" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220207194008-6868 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220207194008-6868 --output=json --layout=cluster: exit status 7 (358.616257ms)

-- stdout --
	{"Name":"insufficient-storage-20220207194008-6868","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220207194008-6868","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0207 19:40:21.694805  139002 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220207194008-6868" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	E0207 19:40:21.707109  139002 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/insufficient-storage-20220207194008-6868/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220207194008-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220207194008-6868
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220207194008-6868: (1.971236607s)
--- PASS: TestInsufficientStorage (15.26s)
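
The RSRC_DOCKER_STORAGE advice embedded in the JSON above is usable as-is; a sketch of the suggested recovery steps:

    # Reclaim space on the host Docker daemon (add -a to also drop unused images).
    docker system prune
    # Or prune inside the minikube node when using the Docker container runtime.
    minikube ssh -- docker system prune
    # A subsequent start should exit 0 instead of exit code 26.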

TestRunningBinaryUpgrade (120.14s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.1352608693.exe start -p running-upgrade-20220207194157-6868 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.1352608693.exe start -p running-upgrade-20220207194157-6868 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m28.089624941s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220207194157-6868 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220207194157-6868 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.567123375s)
helpers_test.go:176: Cleaning up "running-upgrade-20220207194157-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220207194157-6868

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220207194157-6868: (2.624905066s)
--- PASS: TestRunningBinaryUpgrade (120.14s)

TestKubernetesUpgrade (181.52s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220207194134-6868 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220207194134-6868 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.321481331s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220207194134-6868

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220207194134-6868: (12.679697268s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220207194134-6868 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220207194134-6868 status --format={{.Host}}: exit status 7 (131.828152ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220207194134-6868 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220207194134-6868 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m40.231091489s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220207194134-6868 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220207194134-6868 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220207194134-6868 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (83.258095ms)

-- stdout --
	* [kubernetes-upgrade-20220207194134-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.4-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220207194134-6868
	    minikube start -p kubernetes-upgrade-20220207194134-6868 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220207194134-68682 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.4-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220207194134-6868 --kubernetes-version=v1.23.4-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220207194134-6868 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220207194134-6868 --memory=2200 --kubernetes-version=v1.23.4-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (13.054034139s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220207194134-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220207194134-6868
E0207 19:44:34.900012    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220207194134-6868: (2.968425362s)
--- PASS: TestKubernetesUpgrade (181.52s)
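
The behavior verified above is that in-place upgrades of a stopped cluster are allowed while downgrades are refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106); a minimal sketch with an illustrative profile name:

    minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker
    minikube stop -p upgrade-demo
    # Restarting with a newer version upgrades the stopped cluster in place.
    minikube start -p upgrade-demo --kubernetes-version=v1.23.4-rc.0
    # Requesting an older version now fails; per the error text, delete and
    # recreate the profile (or create a second one) to go back.
    minikube start -p upgrade-demo --kubernetes-version=v1.16.0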

TestMissingContainerUpgrade (137.65s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.2214681448.exe start -p missing-upgrade-20220207194023-6868 --memory=2200 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.2214681448.exe start -p missing-upgrade-20220207194023-6868 --memory=2200 --driver=docker  --container-runtime=docker: (1m8.780642912s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220207194023-6868

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220207194023-6868: (11.114609401s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220207194023-6868
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220207194023-6868 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220207194023-6868 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.864982125s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220207194023-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220207194023-6868

=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220207194023-6868: (4.811329452s)
--- PASS: TestMissingContainerUpgrade (137.65s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (87.136292ms)

-- stdout --
	* [NoKubernetes-20220207194023-6868] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13439
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
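
As the MK_USAGE error above states, --no-kubernetes and --kubernetes-version are mutually exclusive; a minimal sketch of the valid variants (profile name illustrative):

    # Either pin a Kubernetes version...
    minikube start -p nok8s-demo --kubernetes-version=v1.23.3 --driver=docker
    # ...or skip Kubernetes entirely, clearing any global version pin first.
    minikube config unset kubernetes-version
    minikube start -p nok8s-demo --no-kubernetes --driver=docker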

TestStoppedBinaryUpgrade/Setup (1.22s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.22s)

TestNoKubernetes/serial/StartWithK8s (55.15s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --driver=docker  --container-runtime=docker: (54.650712695s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220207194023-6868 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (55.15s)

TestStoppedBinaryUpgrade/Upgrade (90.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.1332352546.exe start -p stopped-upgrade-20220207194023-6868 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0207 19:40:39.286140    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.1332352546.exe start -p stopped-upgrade-20220207194023-6868 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m0.308267577s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.1332352546.exe -p stopped-upgrade-20220207194023-6868 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.1332352546.exe -p stopped-upgrade-20220207194023-6868 stop: (2.427857807s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220207194023-6868 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220207194023-6868 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.950582718s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (90.69s)
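
The stopped-binary upgrade above is: provision with an old minikube release, stop, then start the same profile with the new binary; a minimal sketch, with /tmp/minikube-v1.9.0 standing in for the archived release binary:

    /tmp/minikube-v1.9.0 start -p upgrade-demo --memory=2200 --vm-driver=docker
    /tmp/minikube-v1.9.0 stop -p upgrade-demo
    # The new binary adopts and upgrades the stopped cluster.
    out/minikube-linux-amd64 start -p upgrade-demo --driver=docker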

TestNoKubernetes/serial/StartWithStopK8s (16.1s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --no-kubernetes --driver=docker  --container-runtime=docker: (13.039940045s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220207194023-6868 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220207194023-6868 status -o json: exit status 2 (498.705669ms)

-- stdout --
	{"Name":"NoKubernetes-20220207194023-6868","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220207194023-6868

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:125: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220207194023-6868: (2.562037416s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.10s)

TestNoKubernetes/serial/Start (10.84s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --no-kubernetes --driver=docker  --container-runtime=docker: (10.841130609s)
--- PASS: TestNoKubernetes/serial/Start (10.84s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220207194023-6868 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220207194023-6868 "sudo systemctl is-active --quiet service kubelet": exit status 1 (423.897323ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

TestNoKubernetes/serial/ProfileList (1.37s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.37s)

TestNoKubernetes/serial/Stop (1.35s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220207194023-6868
no_kubernetes_test.go:159: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220207194023-6868: (1.35079035s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (6.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:192: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220207194023-6868 --driver=docker  --container-runtime=docker: (6.472562895s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220207194023-6868 "sudo systemctl is-active --quiet service kubelet"

=== CONT  TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220207194023-6868 "sudo systemctl is-active --quiet service kubelet": exit status 1 (437.698178ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)
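
Both kubelet checks above lean on systemd exit codes rather than command output; a sketch of the same probe by hand (profile name illustrative; exit status 3 from systemctl means the unit is inactive, which is the expected result with --no-kubernetes):

    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" \
        && echo "kubelet active" \
        || echo "kubelet inactive (expected)"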

TestStoppedBinaryUpgrade/MinikubeLogs (1.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220207194023-6868

=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220207194023-6868: (1.712830216s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.71s)

TestPause/serial/Start (61.84s)

=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220207194246-6868 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0207 19:43:11.853898    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory

=== CONT  TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220207194246-6868 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m1.840293763s)
--- PASS: TestPause/serial/Start (61.84s)

TestPause/serial/SecondStartNoReconfiguration (5.66s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220207194246-6868 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220207194246-6868 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (5.641594082s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.66s)

TestPause/serial/Pause (0.84s)

=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220207194246-6868 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

TestPause/serial/VerifyStatus (0.55s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220207194246-6868 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220207194246-6868 --output=json --layout=cluster: exit status 2 (545.840622ms)

-- stdout --
	{"Name":"pause-20220207194246-6868","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 16 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220207194246-6868","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.55s)
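
A paused cluster reports StatusCode 418 ("Paused") and minikube status itself exits non-zero, which is what the test asserts; a minimal sketch of the pause lifecycle with an illustrative profile name:

    minikube pause -p pause-demo
    # Exits 2 while paused; the JSON carries StatusCode 418 for the apiserver.
    minikube status -p pause-demo --output=json --layout=cluster
    minikube unpause -p pause-demo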

TestPause/serial/Unpause (0.85s)

=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220207194246-6868 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.85s)

TestPause/serial/PauseAgain (0.95s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220207194246-6868 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

TestPause/serial/DeletePaused (3s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220207194246-6868 --alsologtostderr -v=5

=== CONT  TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220207194246-6868 --alsologtostderr -v=5: (3.000631672s)
--- PASS: TestPause/serial/DeletePaused (3.00s)

TestPause/serial/VerifyDeletedResources (0.87s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:169: (dbg) Run:  docker ps -a
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20220207194246-6868
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20220207194246-6868: exit status 1 (43.992975ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220207194246-6868

** /stderr **
pause_test.go:179: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.87s)
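
Deletion is verified above by querying Docker directly rather than trusting minikube's own bookkeeping; a sketch of the same checks (profile name illustrative):

    minikube delete -p pause-demo
    docker ps -a                        # container should be gone
    docker volume inspect pause-demo    # should exit 1: "No such volume"
    docker network ls                   # profile network should be removed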

TestStartStop/group/embed-certs/serial/FirstStart (84.58s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220207194439-6868 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3
E0207 19:44:55.753265    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:44:55.758536    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:44:55.768805    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:44:55.789147    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:44:55.829597    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:44:55.910759    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:44:56.071100    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:44:56.391720    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:44:57.032505    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:44:58.313094    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:45:00.873723    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:45:05.994866    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:45:12.696565    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:45:16.235395    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:45:36.716069    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220207194439-6868 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3: (1m24.577400719s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.58s)

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220207194439-6868 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [f7a8cd46-c3ae-450f-a8c8-17175abb6852] Pending
helpers_test.go:343: "busybox" [f7a8cd46-c3ae-450f-a8c8-17175abb6852] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [f7a8cd46-c3ae-450f-a8c8-17175abb6852] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013119134s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220207194439-6868 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)
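
The deploy-app check above is plain kubectl against the profile's context; the test polls pod health via helpers, but kubectl wait is an equivalent by-hand check (profile name illustrative; testdata/busybox.yaml ships with the test suite):

    kubectl --context embed-certs-demo create -f testdata/busybox.yaml
    kubectl --context embed-certs-demo wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context embed-certs-demo exec busybox -- /bin/sh -c "ulimit -n"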

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.59s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220207194439-6868 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220207194439-6868 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.59s)
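
Addon image and registry overrides, as exercised above, take Component=value pairs; a minimal sketch with an illustrative profile name:

    # Point the metrics-server addon at a substitute image and registry.
    minikube addons enable metrics-server -p embed-certs-demo \
        --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-demo describe deploy/metrics-server -n kube-system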

TestStartStop/group/embed-certs/serial/Stop (10.87s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220207194439-6868 --alsologtostderr -v=3
E0207 19:46:17.677626    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220207194439-6868 --alsologtostderr -v=3: (10.872199935s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.87s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220207194439-6868 -n embed-certs-20220207194439-6868
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220207194439-6868 -n embed-certs-20220207194439-6868: exit status 7 (98.704502ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220207194439-6868 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (338.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220207194439-6868 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3
E0207 19:46:35.743846    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220207194439-6868 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3: (5m37.885828923s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220207194439-6868 -n embed-certs-20220207194439-6868
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (338.54s)

TestStartStop/group/no-preload/serial/FirstStart (65.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220207194713-6868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4-rc.0
E0207 19:47:39.598716    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220207194713-6868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4-rc.0: (1m5.814402447s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.81s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (57.44s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220207194800-6868 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220207194800-6868 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3: (57.444170775s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (57.44s)

TestStartStop/group/no-preload/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220207194713-6868 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [f1196931-19b7-4767-b85a-dde1c5ebdd00] Pending
helpers_test.go:343: "busybox" [f1196931-19b7-4767-b85a-dde1c5ebdd00] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [f1196931-19b7-4767-b85a-dde1c5ebdd00] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.014416648s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220207194713-6868 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.49s)
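
Note: the DeployApp step is create / wait-for-Ready / exec. A hand-run sketch of the same check, assuming a manifest labelled integration-test=busybox like the harness's testdata/busybox.yaml:

    kubectl --context no-preload-20220207194713-6868 create -f testdata/busybox.yaml
    # wait on the label the harness polls, with its 8m budget
    kubectl --context no-preload-20220207194713-6868 wait pod \
      -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context no-preload-20220207194713-6868 exec busybox -- /bin/sh -c "ulimit -n"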

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220207194713-6868 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220207194713-6868 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)
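
Note: addons enable takes per-addon image and registry overrides, and the harness points metrics-server at a deliberately unreachable registry to verify the override lands in the deployment. The same check by hand (the grep is just a convenience):

    minikube addons enable metrics-server -p no-preload-20220207194713-6868 \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # the deployment's image should now reference fake.domain
    kubectl --context no-preload-20220207194713-6868 \
      describe deploy/metrics-server -n kube-system | grep -i image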

TestStartStop/group/no-preload/serial/Stop (10.91s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220207194713-6868 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220207194713-6868 --alsologtostderr -v=3: (10.908802701s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220207194713-6868 -n no-preload-20220207194713-6868
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220207194713-6868 -n no-preload-20220207194713-6868: exit status 7 (108.535426ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220207194713-6868 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
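
Note: minikube status signals state through its exit code as well as its output; exit status 7 right after a stop simply means the host is down (hence the harness's "may be ok"). Enabling an addon on a stopped profile is expected to work, with the change taking effect on the next start. A sketch, assuming the exit-code behaviour shown above:

    minikube status --format={{.Host}} -p no-preload-20220207194713-6868; echo "exit=$?"   # Stopped, exit=7
    minikube addons enable dashboard -p no-preload-20220207194713-6868 \
      --images=MetricsScraper=k8s.gcr.io/echoserver:1.4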

TestStartStop/group/no-preload/serial/SecondStart (339.03s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220207194713-6868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220207194713-6868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4-rc.0: (5m38.468363992s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220207194713-6868 -n no-preload-20220207194713-6868
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (339.03s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220207194800-6868 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [604ded7b-43b8-467a-a79b-4ffd5ca617eb] Pending
helpers_test.go:343: "busybox" [604ded7b-43b8-467a-a79b-4ffd5ca617eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [604ded7b-43b8-467a-a79b-4ffd5ca617eb] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.011956279s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220207194800-6868 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.50s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.65s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220207194800-6868 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220207194800-6868 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.65s)

TestStartStop/group/default-k8s-different-port/serial/Stop (10.98s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220207194800-6868 --alsologtostderr -v=3
E0207 19:49:16.238812    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220207194800-6868 --alsologtostderr -v=3: (10.982360062s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.98s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220207194800-6868 -n default-k8s-different-port-20220207194800-6868
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220207194800-6868 -n default-k8s-different-port-20220207194800-6868: exit status 7 (99.118358ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220207194800-6868 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (345.5s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220207194800-6868 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3
E0207 19:49:55.754011    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory
E0207 19:50:12.697144    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
E0207 19:50:23.439561    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220207194800-6868 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.3: (5m44.854846959s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220207194800-6868 -n default-k8s-different-port-20220207194800-6868
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (345.50s)

TestStartStop/group/old-k8s-version/serial/SecondStart (315.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220207194436-6868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220207194436-6868 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (5m15.202909489s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220207194436-6868 -n old-k8s-version-20220207194436-6868
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (315.69s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-x72dl" [713a6c1e-f5b5-4224-bad5-6cb21a1475b2] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.068064645s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-x72dl" [713a6c1e-f5b5-4224-bad5-6cb21a1475b2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009479324s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220207194439-6868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220207194439-6868 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/embed-certs/serial/Pause (3.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220207194439-6868 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220207194439-6868 -n embed-certs-20220207194439-6868
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220207194439-6868 -n embed-certs-20220207194439-6868: exit status 2 (411.163525ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220207194439-6868 -n embed-certs-20220207194439-6868
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220207194439-6868 -n embed-certs-20220207194439-6868: exit status 2 (414.353731ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220207194439-6868 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220207194439-6868 -n embed-certs-20220207194439-6868
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220207194439-6868 -n embed-certs-20220207194439-6868
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.21s)
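
Note: pause freezes the control plane without stopping the node, so the two status fields diverge while paused: {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2, until unpause. The sequence, taken straight from this block:

    minikube pause -p embed-certs-20220207194439-6868
    minikube status --format={{.APIServer}} -p embed-certs-20220207194439-6868   # Paused, exit 2
    minikube status --format={{.Kubelet}} -p embed-certs-20220207194439-6868     # Stopped, exit 2
    minikube unpause -p embed-certs-20220207194439-6868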

TestStartStop/group/newest-cni/serial/FirstStart (39.3s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220207195220-6868 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4-rc.0
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220207195220-6868 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4-rc.0: (39.304798255s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.30s)
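
Note: newest-cni starts with a bare CNI configuration (no plugin actually installed), so --wait is narrowed to apiserver,system_pods,default_sa; workload pods cannot schedule until a CNI is deployed, which is what the "cni mode requires additional setup" warnings below are about. The full start line from this run, reformatted:

    minikube start -p newest-cni-20220207195220-6868 \
      --memory=2200 --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true --network-plugin=cni \
      --extra-config=kubelet.network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.23.4-rc.0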

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220207195220-6868 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/newest-cni/serial/Stop (10.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220207195220-6868 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220207195220-6868 --alsologtostderr -v=3: (10.904660551s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.90s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220207195220-6868 -n newest-cni-20220207195220-6868
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220207195220-6868 -n newest-cni-20220207195220-6868: exit status 7 (97.342493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220207195220-6868 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (19.62s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220207195220-6868 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4-rc.0
E0207 19:53:11.853218    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/functional-20220207192144-6868/client.crt: no such file or directory
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220207195220-6868 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.4-rc.0: (19.199598399s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220207195220-6868 -n newest-cni-20220207195220-6868
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.62s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220207195220-6868 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (3.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220207195220-6868 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220207195220-6868 -n newest-cni-20220207195220-6868
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220207195220-6868 -n newest-cni-20220207195220-6868: exit status 2 (387.871997ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220207195220-6868 -n newest-cni-20220207195220-6868
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220207195220-6868 -n newest-cni-20220207195220-6868: exit status 2 (382.252217ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220207195220-6868 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220207195220-6868 -n newest-cni-20220207195220-6868
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220207195220-6868 -n newest-cni-20220207195220-6868
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.09s)

TestNetworkPlugins/group/auto/Start (44.35s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker
E0207 19:54:16.237972    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/addons-20220207191713-6868/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (44.353018299s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.35s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-xq2jb" [7a2af56f-6889-45e2-98ed-c26dde561cbc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-xq2jb" [7a2af56f-6889-45e2-98ed-c26dde561cbc] Running

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.014539267s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)

TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220207194241-6868 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

TestNetworkPlugins/group/auto/NetCatPod (13.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220207194241-6868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-tpxvn" [c01dd6fc-5da9-4321-bb27-0309f9e2228f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:343: "netcat-668db85669-tpxvn" [c01dd6fc-5da9-4321-bb27-0309f9e2228f] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.008457139s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.23s)
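
Note: each NetCatPod step force-replaces the netcat deployment and waits on the app=netcat label; the pod's single container is dnsutils (see the ContainersNotReady lines above), and it doubles as the probe target for the DNS/Localhost/HairPin checks that follow. A hand-run sketch:

    kubectl --context auto-20220207194241-6868 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-20220207194241-6868 wait pod \
      -l app=netcat --for=condition=Ready --timeout=15m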

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-xq2jb" [7a2af56f-6889-45e2-98ed-c26dde561cbc] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006797217s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220207194713-6868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20220207194241-6868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (5.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20220207194241-6868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Non-zero exit: kubectl --context auto-20220207194241-6868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.162530453s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.16s)
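
Note: the HairPin probe has the netcat pod dial its own service name. On this auto-configured cluster the connection fails with exit status 1 and the test still passes, i.e. the harness expects no hairpin support here; compare the cilium HairPin block further down, where the same nc invocation succeeds. Manually:

    kubectl --context auto-20220207194241-6868 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"; echo "hairpin exit=$?"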

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220207194713-6868 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/no-preload/serial/Pause (3.79s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220207194713-6868 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220207194713-6868 -n no-preload-20220207194713-6868
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220207194713-6868 -n no-preload-20220207194713-6868: exit status 2 (461.82895ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220207194713-6868 -n no-preload-20220207194713-6868
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220207194713-6868 -n no-preload-20220207194713-6868: exit status 2 (480.675401ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220207194713-6868 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220207194713-6868 -n no-preload-20220207194713-6868
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220207194713-6868 -n no-preload-20220207194713-6868
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.79s)

TestNetworkPlugins/group/false/Start (51.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p false-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (51.898551384s)
--- PASS: TestNetworkPlugins/group/false/Start (51.90s)

TestNetworkPlugins/group/cilium/Start (91.63s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker
E0207 19:54:55.753658    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/skaffold-20220207193855-6868/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m31.625751775s)
--- PASS: TestNetworkPlugins/group/cilium/Start (91.63s)
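
Note: --cni=cilium has minikube deploy the Cilium manifests during start, which plausibly accounts for most of the extra runtime versus the auto profile above. Once the cluster is up, the plugin's agents are ordinary kube-system pods, which is all the ControllerPod check below waits for:

    minikube start -p cilium-20220207194241-6868 --memory=2048 \
      --wait=true --wait-timeout=5m --cni=cilium \
      --driver=docker --container-runtime=docker
    kubectl --context cilium-20220207194241-6868 get pods -n kube-system -l k8s-app=cilium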

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-nkp4k" [9c43d1d4-290d-40fd-8f3b-a71efdc14c33] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015151446s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-nkp4k" [9c43d1d4-290d-40fd-8f3b-a71efdc14c33] Running
E0207 19:55:12.696659    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/ingress-addon-legacy-20220207192354-6868/client.crt: no such file or directory
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008227081s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220207194800-6868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.59s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220207194800-6868 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.59s)

TestStartStop/group/default-k8s-different-port/serial/Pause (3.95s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220207194800-6868 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220207194800-6868 -n default-k8s-different-port-20220207194800-6868
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220207194800-6868 -n default-k8s-different-port-20220207194800-6868: exit status 2 (489.267004ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220207194800-6868 -n default-k8s-different-port-20220207194800-6868
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220207194800-6868 -n default-k8s-different-port-20220207194800-6868: exit status 2 (477.114976ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220207194800-6868 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220207194800-6868 -n default-k8s-different-port-20220207194800-6868
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220207194800-6868 -n default-k8s-different-port-20220207194800-6868
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.95s)

TestNetworkPlugins/group/false/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20220207194241-6868 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.42s)

TestNetworkPlugins/group/false/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20220207194241-6868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-ggdsn" [edaa29db-e257-4587-b31f-77d182e64d45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-ggdsn" [edaa29db-e257-4587-b31f-77d182e64d45] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.019451464s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.33s)

TestNetworkPlugins/group/false/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

TestNetworkPlugins/group/false/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:182: (dbg) Run:  kubectl --context false-20220207194241-6868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

TestNetworkPlugins/group/false/HairPin (5.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Run:  kubectl --context false-20220207194241-6868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context false-20220207194241-6868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.180158022s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.18s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-bzw2j" [037f8304-533b-40bc-8d95-fd50f0e01939] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.016044038s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220207194241-6868 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.49s)

TestNetworkPlugins/group/cilium/NetCatPod (12.16s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220207194241-6868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context cilium-20220207194241-6868 replace --force -f testdata/netcat-deployment.yaml: (1.087712237s)
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-cmczs" [d637b93a-f3ba-44fd-ab66-8d24ec584ac5] Pending
helpers_test.go:343: "netcat-668db85669-cmczs" [d637b93a-f3ba-44fd-ab66-8d24ec584ac5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-cmczs" [d637b93a-f3ba-44fd-ab66-8d24ec584ac5] Running

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.006967591s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.16s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-zlhv4" [f0db3a87-8ae8-4dd4-84a3-5346029885c6] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011959879s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestNetworkPlugins/group/cilium/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220207194241-6868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.17s)

TestNetworkPlugins/group/cilium/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220207194241-6868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.18s)

TestNetworkPlugins/group/cilium/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220207194241-6868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (8.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-zlhv4" [f0db3a87-8ae8-4dd4-84a3-5346029885c6] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.110261461s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220207194436-6868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context old-k8s-version-20220207194436-6868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (3.403882635s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (8.52s)

TestNetworkPlugins/group/enable-default-cni/Start (44.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (44.796353563s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.80s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220207194436-6868 "sudo crictl images -o json"
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.52s)
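Note: the image check shells into the node and asks the CRI for its image list as JSON. To eyeball the same data interactively (the jq filter is illustrative and assumes crictl's usual {"images":[{"repoTags":[...]}]} layout):

    out/minikube-linux-amd64 ssh -p old-k8s-version-20220207194436-6868 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'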

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220207194241-6868 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220207194241-6868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-6scdh" [ccf93396-d31a-44cf-8e0e-c5ae62db6db5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-6scdh" [ccf93396-d31a-44cf-8e0e-c5ae62db6db5] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.01092574s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)
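Note: each NetCatPod subtest force-replaces the same Deployment from testdata/netcat-deployment.yaml and then polls for up to 15m until a pod labeled app=netcat reports Running. The Pending -> ContainersNotReady -> Running progression logged above is the normal image-pull-and-start sequence; it can be watched live with:

    kubectl --context enable-default-cni-20220207194241-6868 get pods -l app=netcat -w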

TestNetworkPlugins/group/bridge/Start (290.99s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker
E0207 20:01:52.206044    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/old-k8s-version-20220207194436-6868/client.crt: no such file or directory
E0207 20:01:57.095012    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (4m50.994840077s)
--- PASS: TestNetworkPlugins/group/bridge/Start (290.99s)
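Note: the two E-lines above appear to come from client-go's certificate reloader (cert_rotation.go) in the shared test process, which is still watching client certificates of profiles deleted earlier in the run (old-k8s-version, cilium); the errors themselves say the files are gone, so they are noise rather than a failure of this cluster. The stale references can be confirmed by listing the remaining profiles:

    ls /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/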

TestNetworkPlugins/group/kubenet/Start (41.71s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0207 20:03:47.009153    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/no-preload-20220207194713-6868/client.crt: no such file or directory
E0207 20:03:57.956903    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/default-k8s-different-port-20220207194800-6868/client.crt: no such file or directory
E0207 20:03:59.976627    6868 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/profiles/cilium-20220207194241-6868/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220207194241-6868 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (41.709571333s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (41.71s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20220207194241-6868 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.40s)
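Note: the KubeletFlags subtests capture the kubelet command line with pgrep -a so its networking flags can be checked. For this profile, started with --network-plugin=kubenet, the expectation (an inference from the start flags, not shown in this log) is that the flag is propagated to kubelet:

    out/minikube-linux-amd64 ssh -p kubenet-20220207194241-6868 "pgrep -a kubelet" | grep -- --network-plugin=kubenet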

TestNetworkPlugins/group/kubenet/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20220207194241-6868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-st2tp" [bbee168a-ca32-4f04-bc3e-2add944b155a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:343: "netcat-668db85669-st2tp" [bbee168a-ca32-4f04-bc3e-2add944b155a] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.00757934s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220207194241-6868 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (11.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220207194241-6868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-gxn62" [0e5a692a-57ef-4a3c-8706-b20921d6ca74] Pending
helpers_test.go:343: "netcat-668db85669-gxn62" [0e5a692a-57ef-4a3c-8706-b20921d6ca74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-gxn62" [0e5a692a-57ef-4a3c-8706-b20921d6ca74] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006571249s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

Test skip (21/279)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
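Note: "Preload exists" means a preloaded image tarball for this Kubernetes version and container runtime was already downloaded, so caching individual images (and, in the binaries subtest, extracting binaries) would be redundant. If present, the tarball sits under the run's MINIKUBE_HOME (the directory name is minikube's convention; the exact filename encodes the preload and Kubernetes versions):

    ls /home/jenkins/minikube-integration/linux-amd64-docker-docker-13439-3505-75a63be3c8ed71f3c9522a3bb940f2ceca2e7fcb/.minikube/cache/preloaded-tarball/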

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.23.3/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.3/cached-images (0.00s)

TestDownloadOnly/v1.23.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.23.3/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.3/binaries (0.00s)

TestDownloadOnly/v1.23.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.23.3/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.3/kubectl (0.00s)

TestDownloadOnly/v1.23.4-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.23.4-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.4-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.4-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.23.4-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.4-rc.0/binaries (0.00s)

TestDownloadOnly/v1.23.4-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.23.4-rc.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.4-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.31s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220207194712-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220207194712-6868
--- SKIP: TestStartStop/group/disable-driver-mounts (0.31s)

TestNetworkPlugins/group/flannel (0.48s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220207194241-6868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220207194241-6868
--- SKIP: TestNetworkPlugins/group/flannel (0.48s)
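Note: this skip is by design rather than a regression in this run; net_test.go:77 records that flannel is not yet compatible with the Docker driver (the iptables v1.8.3 legacy error quoted above). Exercising flannel would require a VM-based driver; an illustrative invocation outside this suite (profile name hypothetical) would be:

    out/minikube-linux-amd64 start -p flannel-test --memory=2048 --cni=flannel --driver=virtualbox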
