Test Report: Docker_Linux_containerd 13251

c4800a61159ffc3ce43d26d0a2acbbe0889dab73:2022-01-27:22409

Test fail (6/289)

TestRunningBinaryUpgrade (95.92s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.3648144420.exe start -p running-upgrade-20220127031538-6703 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.3648144420.exe start -p running-upgrade-20220127031538-6703 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (56.792626443s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220127031538-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-20220127031538-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 81 (32.788270392s)

-- stdout --
	* [running-upgrade-20220127031538-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.23.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.2
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node running-upgrade-20220127031538-6703 in cluster running-upgrade-20220127031538-6703
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220127031538-6703" container ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	
	

-- /stdout --
** stderr ** 
	I0127 03:16:35.819749  162215 out.go:297] Setting OutFile to fd 1 ...
	I0127 03:16:35.819939  162215 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:16:35.819966  162215 out.go:310] Setting ErrFile to fd 2...
	I0127 03:16:35.819981  162215 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:16:35.820190  162215 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0127 03:16:35.821201  162215 out.go:304] Setting JSON to false
	I0127 03:16:35.822835  162215 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3550,"bootTime":1643249846,"procs":596,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:16:35.822921  162215 start.go:122] virtualization: kvm guest
	I0127 03:16:35.825204  162215 out.go:176] * [running-upgrade-20220127031538-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:16:35.826947  162215 out.go:176]   - MINIKUBE_LOCATION=13251
	I0127 03:16:35.825412  162215 notify.go:174] Checking for updates...
	I0127 03:16:35.829030  162215 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:16:35.831959  162215 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0127 03:16:35.833528  162215 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0127 03:16:35.836136  162215 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:16:35.836736  162215 config.go:176] Loaded profile config "running-upgrade-20220127031538-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 03:16:35.838774  162215 out.go:176] * Kubernetes 1.23.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.2
	I0127 03:16:35.838811  162215 driver.go:344] Setting default libvirt URI to qemu:///system
	I0127 03:16:35.909561  162215 docker.go:132] docker version: linux-20.10.12
	I0127 03:16:35.909680  162215 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:16:36.068853  162215 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-01-27 03:16:35.955547359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:16:36.068964  162215 docker.go:237] overlay module found
	I0127 03:16:36.071599  162215 out.go:176] * Using the docker driver based on existing profile
	I0127 03:16:36.071628  162215 start.go:281] selected driver: docker
	I0127 03:16:36.071650  162215 start.go:798] validating driver "docker" against &{Name:running-upgrade-20220127031538-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
	I0127 03:16:36.071758  162215 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0127 03:16:36.071791  162215 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0127 03:16:36.071810  162215 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0127 03:16:36.077711  162215 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0127 03:16:36.078361  162215 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:16:36.213112  162215 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:66 SystemTime:2022-01-27 03:16:36.120037128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W0127 03:16:36.213277  162215 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0127 03:16:36.213307  162215 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0127 03:16:36.215327  162215 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0127 03:16:36.215442  162215 cni.go:93] Creating CNI manager for ""
	I0127 03:16:36.215462  162215 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0127 03:16:36.215481  162215 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0127 03:16:36.215490  162215 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0127 03:16:36.215499  162215 start_flags.go:302] config:
	{Name:running-upgrade-20220127031538-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
	I0127 03:16:36.217500  162215 out.go:176] * Starting control plane node running-upgrade-20220127031538-6703 in cluster running-upgrade-20220127031538-6703
	I0127 03:16:36.217551  162215 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0127 03:16:36.218982  162215 out.go:176] * Pulling base image ...
	I0127 03:16:36.219023  162215 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 03:16:36.219058  162215 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0127 03:16:36.219070  162215 cache.go:57] Caching tarball of preloaded images
	I0127 03:16:36.219128  162215 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon
	I0127 03:16:36.219324  162215 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 03:16:36.219336  162215 cache.go:60] Finished verifying existence of preloaded tar for  v1.20.0 on containerd
	I0127 03:16:36.219481  162215 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/config.json ...
	I0127 03:16:36.270518  162215 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon, skipping pull
	I0127 03:16:36.270551  162215 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 exists in daemon, skipping load
	I0127 03:16:36.270572  162215 cache.go:208] Successfully downloaded all kic artifacts
	I0127 03:16:36.270623  162215 start.go:313] acquiring machines lock for running-upgrade-20220127031538-6703: {Name:mkf1071b6262cf9755f15f8d9325a911dc32dfe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:16:36.270721  162215 start.go:317] acquired machines lock for "running-upgrade-20220127031538-6703" in 71.162µs
	I0127 03:16:36.270757  162215 start.go:93] Skipping create...Using existing machine configuration
	I0127 03:16:36.270768  162215 fix.go:55] fixHost starting: 
	I0127 03:16:36.271056  162215 cli_runner.go:133] Run: docker container inspect running-upgrade-20220127031538-6703 --format={{.State.Status}}
	I0127 03:16:36.310104  162215 fix.go:108] recreateIfNeeded on running-upgrade-20220127031538-6703: state=Running err=<nil>
	W0127 03:16:36.310134  162215 fix.go:134] unexpected machine state, will restart: <nil>
	I0127 03:16:36.313029  162215 out.go:176] * Updating the running docker "running-upgrade-20220127031538-6703" container ...
	I0127 03:16:36.313080  162215 machine.go:88] provisioning docker machine ...
	I0127 03:16:36.313104  162215 ubuntu.go:169] provisioning hostname "running-upgrade-20220127031538-6703"
	I0127 03:16:36.313161  162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
	I0127 03:16:36.357843  162215 main.go:130] libmachine: Using SSH client type: native
	I0127 03:16:36.358054  162215 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49332 <nil> <nil>}
	I0127 03:16:36.358078  162215 main.go:130] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20220127031538-6703 && echo "running-upgrade-20220127031538-6703" | sudo tee /etc/hostname
	I0127 03:16:36.496859  162215 main.go:130] libmachine: SSH cmd err, output: <nil>: running-upgrade-20220127031538-6703
	
	I0127 03:16:36.496934  162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
	I0127 03:16:36.545848  162215 main.go:130] libmachine: Using SSH client type: native
	I0127 03:16:36.546027  162215 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil>  [] 0s} 127.0.0.1 49332 <nil> <nil>}
	I0127 03:16:36.546060  162215 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20220127031538-6703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20220127031538-6703/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20220127031538-6703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 03:16:36.674975  162215 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0127 03:16:36.675000  162215 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube}
	I0127 03:16:36.675032  162215 ubuntu.go:177] setting up certificates
	I0127 03:16:36.675041  162215 provision.go:83] configureAuth start
	I0127 03:16:36.675085  162215 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220127031538-6703
	I0127 03:16:36.717780  162215 provision.go:138] copyHostCerts
	I0127 03:16:36.717873  162215 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem, removing ...
	I0127 03:16:36.717900  162215 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem
	I0127 03:16:36.717972  162215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem (1078 bytes)
	I0127 03:16:36.718112  162215 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem, removing ...
	I0127 03:16:36.718137  162215 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem
	I0127 03:16:36.718168  162215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem (1123 bytes)
	I0127 03:16:36.718259  162215 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem, removing ...
	I0127 03:16:36.718270  162215 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem
	I0127 03:16:36.718295  162215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem (1675 bytes)
	I0127 03:16:36.718384  162215 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20220127031538-6703 san=[192.168.59.48 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20220127031538-6703]
	I0127 03:16:36.901757  162215 provision.go:172] copyRemoteCerts
	I0127 03:16:36.901825  162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 03:16:36.901899  162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
	I0127 03:16:36.946556  162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
	I0127 03:16:37.039828  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 03:16:37.059790  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0127 03:16:37.077985  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 03:16:37.095662  162215 provision.go:86] duration metric: configureAuth took 420.607228ms
	I0127 03:16:37.095728  162215 ubuntu.go:193] setting minikube options for container-runtime
	I0127 03:16:37.095950  162215 config.go:176] Loaded profile config "running-upgrade-20220127031538-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 03:16:37.095969  162215 machine.go:91] provisioned docker machine in 782.877543ms
	I0127 03:16:37.095979  162215 start.go:267] post-start starting for "running-upgrade-20220127031538-6703" (driver="docker")
	I0127 03:16:37.096009  162215 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 03:16:37.096057  162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 03:16:37.096106  162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
	I0127 03:16:37.136145  162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
	I0127 03:16:37.226055  162215 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 03:16:37.229270  162215 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 03:16:37.229308  162215 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 03:16:37.229321  162215 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 03:16:37.229327  162215 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0127 03:16:37.229338  162215 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/addons for local assets ...
	I0127 03:16:37.229401  162215 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files for local assets ...
	I0127 03:16:37.229495  162215 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem -> 67032.pem in /etc/ssl/certs
	I0127 03:16:37.229599  162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 03:16:37.238159  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem --> /etc/ssl/certs/67032.pem (1708 bytes)
	I0127 03:16:37.258270  162215 start.go:270] post-start completed in 162.254477ms
	I0127 03:16:37.258334  162215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 03:16:37.258369  162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
	I0127 03:16:37.293726  162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
	I0127 03:16:37.383499  162215 fix.go:57] fixHost completed within 1.112724956s
	I0127 03:16:37.383548  162215 start.go:80] releasing machines lock for "running-upgrade-20220127031538-6703", held for 1.112806432s
	I0127 03:16:37.383654  162215 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220127031538-6703
	I0127 03:16:37.418044  162215 ssh_runner.go:195] Run: systemctl --version
	I0127 03:16:37.418094  162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
	I0127 03:16:37.418107  162215 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0127 03:16:37.418170  162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
	I0127 03:16:37.454791  162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
	I0127 03:16:37.465223  162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
	I0127 03:16:37.551241  162215 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 03:16:37.573143  162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 03:16:37.581870  162215 docker.go:183] disabling docker service ...
	I0127 03:16:37.581924  162215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 03:16:37.597745  162215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 03:16:37.606600  162215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 03:16:37.698698  162215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 03:16:37.774854  162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 03:16:37.784029  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 03:16:37.799704  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1
fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4yIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10
KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9
kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0127 03:16:37.814976  162215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 03:16:37.821033  162215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 03:16:37.827506  162215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:16:37.916718  162215 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 03:16:38.045659  162215 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 03:16:38.045738  162215 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 03:16:38.050251  162215 start.go:462] Will wait 60s for crictl version
	I0127 03:16:38.050318  162215 ssh_runner.go:195] Run: sudo crictl version
	I0127 03:16:38.078541  162215 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-01-27T03:16:38Z" level=fatal msg="getting the runtime version failed: rpc error: code = Unknown desc = server is not initialized yet"
	I0127 03:16:49.127192  162215 ssh_runner.go:195] Run: sudo crictl version
	I0127 03:16:49.142397  162215 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.3
	RuntimeApiVersion:  v1alpha2
	I0127 03:16:49.142463  162215 ssh_runner.go:195] Run: containerd --version
	I0127 03:16:49.171121  162215 ssh_runner.go:195] Run: containerd --version
	I0127 03:16:49.234691  162215 out.go:176] * Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	I0127 03:16:49.234764  162215 cli_runner.go:133] Run: docker network inspect running-upgrade-20220127031538-6703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 03:16:49.273789  162215 ssh_runner.go:195] Run: grep 192.168.59.1	host.minikube.internal$ /etc/hosts
	I0127 03:16:49.305738  162215 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0127 03:16:49.305821  162215 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 03:16:49.305896  162215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:16:49.324115  162215 containerd.go:608] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0127 03:16:49.324182  162215 ssh_runner.go:195] Run: which lz4
	I0127 03:16:49.328061  162215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 03:16:49.331530  162215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: source file and destination file are different sizes
	I0127 03:16:49.331587  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (582465074 bytes)
	I0127 03:16:52.346824  162215 containerd.go:555] Took 3.018789 seconds to copy over tarball
	I0127 03:16:52.346914  162215 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:17:03.570053  162215 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.223111505s)
	I0127 03:17:03.570082  162215 containerd.go:562] Took 11.223225 seconds t extract the tarball
	I0127 03:17:03.570093  162215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 03:17:03.660335  162215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:17:03.871641  162215 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 03:17:04.105253  162215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:17:04.124625  162215 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
	I0127 03:17:04.124734  162215 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0127 03:17:04.124936  162215 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0127 03:17:04.125040  162215 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0127 03:17:04.125150  162215 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0127 03:17:04.125243  162215 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0127 03:17:04.125465  162215 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
	I0127 03:17:04.125633  162215 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0127 03:17:04.125741  162215 image.go:134] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0127 03:17:04.125832  162215 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:17:04.125921  162215 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0127 03:17:04.127466  162215 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0127 03:17:04.127977  162215 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
	I0127 03:17:04.128109  162215 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
	I0127 03:17:04.128136  162215 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist
	I0127 03:17:04.128278  162215 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.0: Error response from daemon: reference does not exist
	I0127 03:17:04.128413  162215 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.0: Error response from daemon: reference does not exist
	I0127 03:17:04.128440  162215 image.go:180] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I0127 03:17:04.128545  162215 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.0: Error response from daemon: reference does not exist
	I0127 03:17:04.128565  162215 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.0: Error response from daemon: reference does not exist
	I0127 03:17:04.128669  162215 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist
	I0127 03:17:04.418344  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0"
	I0127 03:17:04.419139  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0"
	I0127 03:17:04.424426  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0"
	I0127 03:17:04.427553  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0"
	I0127 03:17:04.443893  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0"
	I0127 03:17:04.444493  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0"
	I0127 03:17:04.465910  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0127 03:17:04.509961  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2"
	I0127 03:17:05.015592  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.7"
	I0127 03:17:05.022587  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.3.1"
	I0127 03:17:05.309789  162215 cache_images.go:123] Successfully loaded all cached images
	I0127 03:17:05.309814  162215 cache_images.go:92] LoadImages completed in 1.185159822s
	I0127 03:17:05.309874  162215 ssh_runner.go:195] Run: sudo crictl info
	I0127 03:17:05.330052  162215 cni.go:93] Creating CNI manager for ""
	I0127 03:17:05.330072  162215 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0127 03:17:05.330083  162215 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0127 03:17:05.330095  162215 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.59.48 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20220127031538-6703 NodeName:running-upgrade-20220127031538-6703 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.59.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.59.48 CgroupDriver:cgrou
pfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0127 03:17:05.330263  162215 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.59.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "running-upgrade-20220127031538-6703"
	  kubeletExtraArgs:
	    node-ip: 192.168.59.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.59.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:17:05.330371  162215 kubeadm.go:791] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=running-upgrade-20220127031538-6703 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.59.48 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0127 03:17:05.330429  162215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 03:17:05.339921  162215 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:17:05.339997  162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:17:05.347885  162215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0127 03:17:05.363235  162215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:17:05.408320  162215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
	I0127 03:17:05.432119  162215 ssh_runner.go:195] Run: grep 192.168.59.48	control-plane.minikube.internal$ /etc/hosts
	I0127 03:17:05.436673  162215 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703 for IP: 192.168.59.48
	I0127 03:17:05.436796  162215 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
	I0127 03:17:05.436850  162215 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
	I0127 03:17:05.436973  162215 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.key
	I0127 03:17:05.437053  162215 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.key.fc40ab25
	I0127 03:17:05.437109  162215 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.key
	I0127 03:17:05.437225  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem (1338 bytes)
	W0127 03:17:05.437268  162215 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703_empty.pem, impossibly tiny 0 bytes
	I0127 03:17:05.437284  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 03:17:05.437316  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:17:05.437342  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:17:05.437364  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1675 bytes)
	I0127 03:17:05.437417  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem (1708 bytes)
	I0127 03:17:05.438513  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0127 03:17:05.461440  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 03:17:05.516402  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:17:05.537977  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 03:17:05.561369  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:17:05.629987  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 03:17:05.657572  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:17:05.724754  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:17:05.808160  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem --> /usr/share/ca-certificates/67032.pem (1708 bytes)
	I0127 03:17:05.914815  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:17:05.937255  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem --> /usr/share/ca-certificates/6703.pem (1338 bytes)
	I0127 03:17:05.960381  162215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:17:06.036900  162215 ssh_runner.go:195] Run: openssl version
	I0127 03:17:06.042924  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67032.pem && ln -fs /usr/share/ca-certificates/67032.pem /etc/ssl/certs/67032.pem"
	I0127 03:17:06.064802  162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67032.pem
	I0127 03:17:06.068353  162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:47 /usr/share/ca-certificates/67032.pem
	I0127 03:17:06.068400  162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67032.pem
	I0127 03:17:06.074066  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67032.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:17:06.104715  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:17:06.112573  162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:17:06.115981  162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:42 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:17:06.116036  162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:17:06.120843  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:17:06.127821  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6703.pem && ln -fs /usr/share/ca-certificates/6703.pem /etc/ssl/certs/6703.pem"
	I0127 03:17:06.136027  162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6703.pem
	I0127 03:17:06.139199  162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:47 /usr/share/ca-certificates/6703.pem
	I0127 03:17:06.139245  162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6703.pem
	I0127 03:17:06.144267  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6703.pem /etc/ssl/certs/51391683.0"
	I0127 03:17:06.151276  162215 kubeadm.go:388] StartCluster: {Name:running-upgrade-20220127031538-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
	I0127 03:17:06.151365  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 03:17:06.151395  162215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:17:06.167608  162215 cri.go:87] found id: ""
	I0127 03:17:06.167655  162215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:17:06.207586  162215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 03:17:06.215687  162215 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 03:17:06.216403  162215 kubeconfig.go:116] verify returned: extract IP: "running-upgrade-20220127031538-6703" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0127 03:17:06.216619  162215 kubeconfig.go:127] "running-upgrade-20220127031538-6703" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig - will repair!
	I0127 03:17:06.217195  162215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk52def711e0760588c8e7c9e046110fe006e484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:17:06.241403  162215 kapi.go:59] client config for running-upgrade-20220127031538-6703: &rest.Config{Host:"https://192.168.59.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/runn
ing-upgrade-20220127031538-6703/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 03:17:06.243260  162215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 03:17:06.251876  162215 kubeadm.go:593] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-01-27 03:16:10.898540450 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-01-27 03:17:05.423671678 +0000
	@@ -65,4 +65,10 @@
	 apiVersion: kubeproxy.config.k8s.io/v1alpha1
	 kind: KubeProxyConfiguration
	 clusterCIDR: "10.244.0.0/16"
	-metricsBindAddress: 192.168.59.48:10249
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
	I0127 03:17:06.251922  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:17:06.932139  162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:17:06.943350  162215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:17:06.959832  162215 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0127 03:17:06.959882  162215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:17:06.968121  162215 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:17:06.968171  162215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	W0127 03:17:07.446011  162215 out.go:241] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 03:17:07.446055  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:17:07.512319  162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:17:07.522160  162215 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0127 03:17:07.522213  162215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:17:07.529324  162215 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:17:07.529370  162215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 03:17:07.690870  162215 kubeadm.go:390] StartCluster complete in 1.539598545s
	I0127 03:17:07.690939  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:17:07.690986  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:17:07.704611  162215 cri.go:87] found id: ""
	I0127 03:17:07.704637  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.704645  162215 logs.go:276] No container was found matching "kube-apiserver"
	I0127 03:17:07.704668  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 03:17:07.704745  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:17:07.717933  162215 cri.go:87] found id: ""
	I0127 03:17:07.717961  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.717971  162215 logs.go:276] No container was found matching "etcd"
	I0127 03:17:07.717979  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 03:17:07.718026  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:17:07.731059  162215 cri.go:87] found id: ""
	I0127 03:17:07.731079  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.731085  162215 logs.go:276] No container was found matching "coredns"
	I0127 03:17:07.731090  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:17:07.731152  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:17:07.745381  162215 cri.go:87] found id: ""
	I0127 03:17:07.745402  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.745408  162215 logs.go:276] No container was found matching "kube-scheduler"
	I0127 03:17:07.745417  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:17:07.745455  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:17:07.762094  162215 cri.go:87] found id: ""
	I0127 03:17:07.762125  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.762133  162215 logs.go:276] No container was found matching "kube-proxy"
	I0127 03:17:07.762142  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:17:07.762183  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:17:07.775553  162215 cri.go:87] found id: ""
	I0127 03:17:07.775580  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.775586  162215 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0127 03:17:07.775591  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:17:07.775638  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:17:07.789738  162215 cri.go:87] found id: ""
	I0127 03:17:07.789766  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.789774  162215 logs.go:276] No container was found matching "storage-provisioner"
	I0127 03:17:07.789782  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:17:07.789830  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:17:07.803044  162215 cri.go:87] found id: ""
	I0127 03:17:07.803071  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.803078  162215 logs.go:276] No container was found matching "kube-controller-manager"
	I0127 03:17:07.803086  162215 logs.go:123] Gathering logs for kubelet ...
	I0127 03:17:07.803117  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:17:07.895271  162215 logs.go:123] Gathering logs for dmesg ...
	I0127 03:17:07.895305  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:17:07.915018  162215 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:17:07.915058  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 03:17:08.197863  162215 logs.go:123] Gathering logs for containerd ...
	I0127 03:17:08.197892  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 03:17:08.257172  162215 logs.go:123] Gathering logs for container status ...
	I0127 03:17:08.257213  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 03:17:08.275508  162215 out.go:370] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0127 03:17:08.275548  162215 out.go:241] * 
	* 
	W0127 03:17:08.275688  162215 out.go:241] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 03:17:08.275702  162215 out.go:241] * 
	* 
	W0127 03:17:08.276469  162215 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 03:17:08.396612  162215 out.go:176] 
	W0127 03:17:08.396806  162215 out.go:241] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 03:17:08.396919  162215 out.go:241] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	* Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0127 03:17:08.396990  162215 out.go:241] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	* Related issue: https://github.com/kubernetes/minikube/issues/5484
	I0127 03:17:08.516534  162215 out.go:176] 

                                                
                                                
** /stderr **
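The kubeadm preflight failure in the stderr above (ports 8443, 10259, 10257, 2379 and 2380 already in use) indicates that the control-plane processes started by the old v1.16.0 binary were still bound to those ports when the new binary re-ran "kubeadm init". A minimal diagnostic sketch for a local reproduction, assuming shell access to the node (for example "minikube ssh -p running-upgrade-20220127031538-6703" or "docker exec -it running-upgrade-20220127031538-6703 bash"); the commands are illustrative and are not part of the test itself:
	# Show which processes hold the conflicting control-plane ports (ss from iproute2; lsof also works if installed)
	sudo ss -ltnp | grep -E ':(8443|10259|10257|2379|2380)\b'
	# Per-port lookup: selecting by port is "lsof -i :<port>"; the "-p" flag filters by PID,
	# so the suggestion text above ("lsof -p<port>") would need the owning PID rather than the port number
	sudo lsof -i :8443
	# The still-running containers behind those ports can be listed through the CRI, as the log itself does
	sudo crictl ps -a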
version_upgrade_test.go:139: upgrade from v1.16.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-20220127031538-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 81
panic.go:642: *** TestRunningBinaryUpgrade FAILED at 2022-01-27 03:17:08.555545289 +0000 UTC m=+2113.291999322
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20220127031538-6703
helpers_test.go:236: (dbg) docker inspect running-upgrade-20220127031538-6703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411",
	        "Created": "2022-01-27T03:15:46.866746988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-01-27T03:15:47.358745002Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06db6ca724463f987019154e0475424113315da76733d5b67f90e35719d46c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411/hostname",
	        "HostsPath": "/var/lib/docker/containers/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411/hosts",
	        "LogPath": "/var/lib/docker/containers/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411-json.log",
	        "Name": "/running-upgrade-20220127031538-6703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220127031538-6703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-20220127031538-6703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b7fa8ba6ff656bada769e1e60c3267ede44cf41f11cdb46e8a8c6e3b71f2b6fd-init/diff:/var/lib/docker/overlay2/48a6afa5e0a9516ce4dc1f5459b529e8154283097947fb2da9335c65368c5887/diff:/var/lib/docker/overlay2/6dbceb9cc216ca99567fbf9a5bf1fc96d700503aa51960c28af3924c1efb03c7/diff:/var/lib/docker/overlay2/a86b843001e156f716be5143b37e71ed5928e4e10d99bed21cf3773483ea17c5/diff:/var/lib/docker/overlay2/3eac27006d6f92241d2f42ba10eca71a5d0f90648fa7c4aa9ae73b759f6df770/diff:/var/lib/docker/overlay2/d15ced90c80ff8677732f9d26eb292c2a4e1545c26588d4544070b458501653c/diff:/var/lib/docker/overlay2/45645fdb8a2923b75e5b012368356441bb80262933cb1e2bcb8ffe658b8f9a45/diff:/var/lib/docker/overlay2/ac170811b40c36c35c27809bd81b8c23ce8661ddd82d1947f98485abff72bd4b/diff:/var/lib/docker/overlay2/8efa8472f00aa9bedc29412758a3b398d87ea0dc92476662a2e0344c46c663b9/diff:/var/lib/docker/overlay2/1164683f8b88eae06c95e6b3804f088d2035c727df3e9c456b05044372ee383d/diff:/var/lib/docker/overlay2/740b74
b4b91e2781b4e6d8521c4da1d332c4916d7d79383b77c1a2ddab8ccd2e/diff:/var/lib/docker/overlay2/ea509413497cba005bd19f179302c5f08d095f1c5c9a3bbfbb21850e19e3390c/diff:/var/lib/docker/overlay2/3c66ffb89b0b641c530714389ea38e6de8efeda792c328693cfc2194c3193b60/diff:/var/lib/docker/overlay2/5207a1c75b52f7376eda1627ba66a9240792a3fa96d186014d0d02d9adf57e9c/diff:/var/lib/docker/overlay2/c6eba072681c5d8947855204f792a0030cec1970639e088b12a99d23512cf8e3/diff:/var/lib/docker/overlay2/f1ae6aa616c8e759801078bd2bf4dfff76a2418756948c43bada9f1c0484860c/diff:/var/lib/docker/overlay2/97545af48f6dc52660e45e0dae9d498defbd2c20715fd9dc74c7ce304ba67043/diff:/var/lib/docker/overlay2/1941873d8cc5ec600b1f778c22cda64d688bd48ff81f335f7727860c8e718072/diff:/var/lib/docker/overlay2/b03d5c7215d2284a9c22cca30cdd66f610c8f3842b6bf8c1e4225305acc1eb39/diff:/var/lib/docker/overlay2/a857bd38deffdc9de25ba148039b1a3d4aca58d09ee4c67de63ec29d7a83bb9d/diff:/var/lib/docker/overlay2/c45a1482c32587e155ef7e149ea399b10ab07a265579d73d7730a4c3d4847cc5/diff:/var/lib/d
ocker/overlay2/15565ecef5e596a7aad526fb8d7e925b284014279fcb4226f560c1b8ad45ad35/diff:/var/lib/docker/overlay2/202a1c7df018d3dd5942d52372ccef41da658966eb357ad69f6608f3b027d321/diff:/var/lib/docker/overlay2/59b70058325819e20b0bebbc70846cf1fcbe95ea576cc28cffc14a82a9402ca4/diff:/var/lib/docker/overlay2/1230ef6cb66210a5a62715c30556d5f9251e81d7cee67df68886be81910c7db6/diff:/var/lib/docker/overlay2/46b452e38aae1d4280f874acff6cdacdde65a9d1785a0de0af581b218d3a2b26/diff:/var/lib/docker/overlay2/0a29c1731383192b00674d178bd378566a561c251e78a10f2de04721db311879/diff:/var/lib/docker/overlay2/7758341c3a0ab19235e017f7e88be25f37e1e2a263508513aaccd553cc6fb880/diff:/var/lib/docker/overlay2/42c9967b3df8c320f21f059303cc152fcc0583228cc80106572855ae7fbb87ae/diff:/var/lib/docker/overlay2/a2f0d15380d2fb22943e2441b85052c87e6cae06d9ebd665ecab557dc71e355f/diff:/var/lib/docker/overlay2/71af46fa98e949cffe4368e1331392f4fa3a1ac9bb236678c6ea9ea99ad637aa/diff:/var/lib/docker/overlay2/80be004ea477d9004f0ea34e691d11dcdccdb2e607fdbae42afa4486e72
676db/diff:/var/lib/docker/overlay2/c77a99aeb6fea504fe63df31ba6bcdbba041a5e911f9e53fa3ac5ff6e3656895/diff:/var/lib/docker/overlay2/11124e797e5aaba260680c1fb60457fa47062fb5868fad94b44d706c4a449ab0/diff:/var/lib/docker/overlay2/619cb3f0df642cae9ac698b34b205f59683496e42a3a232e0cc09baada9272d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7fa8ba6ff656bada769e1e60c3267ede44cf41f11cdb46e8a8c6e3b71f2b6fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7fa8ba6ff656bada769e1e60c3267ede44cf41f11cdb46e8a8c6e3b71f2b6fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7fa8ba6ff656bada769e1e60c3267ede44cf41f11cdb46e8a8c6e3b71f2b6fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220127031538-6703",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220127031538-6703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220127031538-6703",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220127031538-6703",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220127031538-6703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "501cbd82eb0b223dfe8e2ffb54957946ede035476125e2fb4385688067488a76",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49332"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49331"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49330"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49329"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/501cbd82eb0b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-20220127031538-6703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.48"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0cb3caba465d",
	                        "running-upgrade-20220127031538-6703"
	                    ],
	                    "NetworkID": "71ffccd776eccfd25cebeec3d1662202d2e6983928d0b58895669eb96579526e",
	                    "EndpointID": "71bc31de786f2fd0a2f5e2011547b7e79b80286df8b81f4e764b67f0b37b1ac4",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.48",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:30",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
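The docker inspect dump above is complete but verbose; when only a couple of fields matter for triage (the published host ports and the node IP that kubeadm binds to), a Go-template query against the same container keeps the output readable. A sketch using docker inspect's standard --format support; the field paths mirror the JSON shown above:
	# Published host ports for the node container (22, 2376, 5000, 8443)
	docker inspect -f '{{json .NetworkSettings.Ports}}' running-upgrade-20220127031538-6703
	# Node IP on the profile network (192.168.59.48 in the output above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' running-upgrade-20220127031538-6703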
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20220127031538-6703 -n running-upgrade-20220127031538-6703
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20220127031538-6703 -n running-upgrade-20220127031538-6703: exit status 2 (425.081478ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p running-upgrade-20220127031538-6703 logs -n 25
E0127 03:17:09.641906    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p running-upgrade-20220127031538-6703 logs -n 25: (1.788056316s)
helpers_test.go:253: TestRunningBinaryUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                     | NoKubernetes-20220127031151-6703       | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:20 UTC | Thu, 27 Jan 2022 03:13:29 UTC |
	|         | NoKubernetes-20220127031151-6703       |                                        |         |         |                               |                               |
	|         | --no-kubernetes --driver=docker        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| profile | list                                   | minikube                               | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:30 UTC | Thu, 27 Jan 2022 03:13:30 UTC |
	| profile | list --output=json                     | minikube                               | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:31 UTC | Thu, 27 Jan 2022 03:13:31 UTC |
	| stop    | -p                                     | NoKubernetes-20220127031151-6703       | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:31 UTC | Thu, 27 Jan 2022 03:13:33 UTC |
	|         | NoKubernetes-20220127031151-6703       |                                        |         |         |                               |                               |
	| start   | -p                                     | NoKubernetes-20220127031151-6703       | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:33 UTC | Thu, 27 Jan 2022 03:13:38 UTC |
	|         | NoKubernetes-20220127031151-6703       |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | NoKubernetes-20220127031151-6703       | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:39 UTC | Thu, 27 Jan 2022 03:13:42 UTC |
	|         | NoKubernetes-20220127031151-6703       |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:20 UTC | Thu, 27 Jan 2022 03:14:26 UTC |
	|         | kubernetes-upgrade-20220127031320-6703 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.16.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:14:26 UTC | Thu, 27 Jan 2022 03:14:52 UTC |
	|         | kubernetes-upgrade-20220127031320-6703 |                                        |         |         |                               |                               |
	| start   | -p                                     | missing-upgrade-20220127031307-6703    | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:14:26 UTC | Thu, 27 Jan 2022 03:15:33 UTC |
	|         | missing-upgrade-20220127031307-6703    |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | stopped-upgrade-20220127031342-6703    | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:14:22 UTC | Thu, 27 Jan 2022 03:15:37 UTC |
	|         | stopped-upgrade-20220127031342-6703    |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| logs    | -p                                     | stopped-upgrade-20220127031342-6703    | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:15:37 UTC | Thu, 27 Jan 2022 03:15:38 UTC |
	|         | stopped-upgrade-20220127031342-6703    |                                        |         |         |                               |                               |
	| delete  | -p                                     | missing-upgrade-20220127031307-6703    | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:15:33 UTC | Thu, 27 Jan 2022 03:15:38 UTC |
	|         | missing-upgrade-20220127031307-6703    |                                        |         |         |                               |                               |
	| delete  | -p                                     | stopped-upgrade-20220127031342-6703    | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:15:38 UTC | Thu, 27 Jan 2022 03:15:41 UTC |
	|         | stopped-upgrade-20220127031342-6703    |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:14:52 UTC | Thu, 27 Jan 2022 03:16:01 UTC |
	|         | kubernetes-upgrade-20220127031320-6703 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3-rc.0      |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	| start   | -p                                     | cert-expiration-20220127031151-6703    | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:02 UTC | Thu, 27 Jan 2022 03:16:18 UTC |
	|         | cert-expiration-20220127031151-6703    |                                        |         |         |                               |                               |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --cert-expiration=8760h                |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | cert-expiration-20220127031151-6703    | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:19 UTC | Thu, 27 Jan 2022 03:16:22 UTC |
	|         | cert-expiration-20220127031151-6703    |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20220127031622-6703         | kubenet-20220127031622-6703            | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:22 UTC | Thu, 27 Jan 2022 03:16:23 UTC |
	| delete  | -p flannel-20220127031623-6703         | flannel-20220127031623-6703            | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:23 UTC | Thu, 27 Jan 2022 03:16:23 UTC |
	| delete  | -p false-20220127031623-6703           | false-20220127031623-6703              | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:24 UTC | Thu, 27 Jan 2022 03:16:24 UTC |
	| start   | -p pause-20220127031541-6703           | pause-20220127031541-6703              | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:15:41 UTC | Thu, 27 Jan 2022 03:16:45 UTC |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=docker             |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:02 UTC | Thu, 27 Jan 2022 03:16:47 UTC |
	|         | kubernetes-upgrade-20220127031320-6703 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.3-rc.0      |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	| delete  | -p                                     | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:48 UTC | Thu, 27 Jan 2022 03:16:51 UTC |
	|         | kubernetes-upgrade-20220127031320-6703 |                                        |         |         |                               |                               |
	| start   | -p pause-20220127031541-6703           | pause-20220127031541-6703              | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:45 UTC | Thu, 27 Jan 2022 03:17:02 UTC |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| pause   | -p pause-20220127031541-6703           | pause-20220127031541-6703              | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:17:02 UTC | Thu, 27 Jan 2022 03:17:03 UTC |
	|         | --alsologtostderr -v=5                 |                                        |         |         |                               |                               |
	| unpause | -p pause-20220127031541-6703           | pause-20220127031541-6703              | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:17:04 UTC | Thu, 27 Jan 2022 03:17:04 UTC |
	|         | --alsologtostderr -v=5                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/27 03:16:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 03:16:55.321171  165385 out.go:297] Setting OutFile to fd 1 ...
	I0127 03:16:55.321267  165385 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:16:55.321270  165385 out.go:310] Setting ErrFile to fd 2...
	I0127 03:16:55.321274  165385 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:16:55.321417  165385 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0127 03:16:55.321825  165385 out.go:304] Setting JSON to false
	I0127 03:16:55.323662  165385 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3570,"bootTime":1643249846,"procs":600,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:16:55.323747  165385 start.go:122] virtualization: kvm guest
	I0127 03:16:55.457507  165385 out.go:176] * [cert-options-20220127031655-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:16:55.457731  165385 notify.go:174] Checking for updates...
	I0127 03:16:55.557841  165385 out.go:176]   - MINIKUBE_LOCATION=13251
	I0127 03:16:55.658093  165385 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:16:55.705301  165385 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0127 03:16:52.346824  162215 containerd.go:555] Took 3.018789 seconds to copy over tarball
	I0127 03:16:52.346914  162215 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 03:16:55.797337  165385 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0127 03:16:55.928582  165385 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:16:55.929331  165385 config.go:176] Loaded profile config "force-systemd-flag-20220127031624-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
	I0127 03:16:55.929474  165385 config.go:176] Loaded profile config "pause-20220127031541-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
	I0127 03:16:55.929604  165385 config.go:176] Loaded profile config "running-upgrade-20220127031538-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 03:16:55.929654  165385 driver.go:344] Setting default libvirt URI to qemu:///system
	I0127 03:16:55.978302  165385 docker.go:132] docker version: linux-20.10.12
	I0127 03:16:55.978418  165385 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:16:56.080312  165385 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-01-27 03:16:56.009417453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:16:56.080412  165385 docker.go:237] overlay module found
	I0127 03:16:56.231287  165385 out.go:176] * Using the docker driver based on user configuration
	I0127 03:16:56.231324  165385 start.go:281] selected driver: docker
	I0127 03:16:56.231331  165385 start.go:798] validating driver "docker" against <nil>
	I0127 03:16:56.231355  165385 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0127 03:16:56.231439  165385 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0127 03:16:56.231462  165385 out.go:241] ! Your cgroup does not allow setting memory.
	I0127 03:16:56.330803  165385 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0127 03:16:56.331784  165385 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:16:56.429478  165385 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-01-27 03:16:56.367138586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:16:56.429661  165385 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0127 03:16:56.429898  165385 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0127 03:16:56.429926  165385 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 03:16:56.429948  165385 cni.go:93] Creating CNI manager for ""
	I0127 03:16:56.429963  165385 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0127 03:16:56.429977  165385 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0127 03:16:56.429984  165385 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0127 03:16:56.429991  165385 start_flags.go:297] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 03:16:56.430002  165385 start_flags.go:302] config:
	{Name:cert-options-20220127031655-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:cert-options-20220127031655-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSD
omain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0127 03:16:56.499919  165385 out.go:176] * Starting control plane node cert-options-20220127031655-6703 in cluster cert-options-20220127031655-6703
	I0127 03:16:56.499982  165385 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0127 03:16:56.560884  165385 out.go:176] * Pulling base image ...
	I0127 03:16:56.560931  165385 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
	I0127 03:16:56.560982  165385 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4
	I0127 03:16:56.560990  165385 cache.go:57] Caching tarball of preloaded images
	I0127 03:16:56.561024  165385 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0127 03:16:56.561247  165385 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 03:16:56.561259  165385 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on containerd
	I0127 03:16:56.561422  165385 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cert-options-20220127031655-6703/config.json ...
	I0127 03:16:56.561445  165385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cert-options-20220127031655-6703/config.json: {Name:mk46d752be84fefc029f06754d1b0613d9d4a329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:16:56.597158  165385 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0127 03:16:56.597178  165385 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0127 03:16:56.597186  165385 cache.go:208] Successfully downloaded all kic artifacts
	I0127 03:16:56.597215  165385 start.go:313] acquiring machines lock for cert-options-20220127031655-6703: {Name:mk33ad0ba81ca90eb57c18f82e5e773f16dd5558 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:16:56.597328  165385 start.go:317] acquired machines lock for "cert-options-20220127031655-6703" in 100.268µs
	I0127 03:16:56.597348  165385 start.go:89] Provisioning new machine with config: &{Name:cert-options-20220127031655-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:cert-options-20220127031655-6703 Namespace:default APIServerName:minikubeCA API
ServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8555 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:16:56.597408  165385 start.go:126] createHost starting for "" (driver="docker")
	I0127 03:16:59.143205  163929 ssh_runner.go:195] Run: sudo crictl version
	I0127 03:16:59.168000  163929 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I0127 03:16:59.168055  163929 ssh_runner.go:195] Run: containerd --version
	I0127 03:16:59.186157  163929 ssh_runner.go:195] Run: containerd --version
	I0127 03:16:59.302578  163929 out.go:176] * Preparing Kubernetes v1.23.2 on containerd 1.4.12 ...
	I0127 03:16:59.302681  163929 cli_runner.go:133] Run: docker network inspect pause-20220127031541-6703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 03:16:59.348724  163929 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0127 03:16:59.399212  163929 out.go:176]   - kubelet.housekeeping-interval=5m
	I0127 03:16:59.435211  163929 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0127 03:16:59.435322  163929 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
	I0127 03:16:59.435391  163929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:16:59.462310  163929 containerd.go:612] all images are preloaded for containerd runtime.
	I0127 03:16:59.462335  163929 containerd.go:526] Images already preloaded, skipping extraction
	I0127 03:16:59.462377  163929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:16:59.486838  163929 containerd.go:612] all images are preloaded for containerd runtime.
	I0127 03:16:59.486862  163929 cache_images.go:84] Images are preloaded, skipping loading
	I0127 03:16:59.486913  163929 ssh_runner.go:195] Run: sudo crictl info
	I0127 03:16:59.513759  163929 cni.go:93] Creating CNI manager for ""
	I0127 03:16:59.513785  163929 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0127 03:16:59.513813  163929 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0127 03:16:59.513830  163929 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220127031541-6703 NodeName:pause-20220127031541-6703 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/l
ib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0127 03:16:59.514018  163929 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20220127031541-6703"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 03:16:59.514122  163929 kubeadm.go:791] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20220127031541-6703 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:pause-20220127031541-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0127 03:16:59.514194  163929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0127 03:16:59.523550  163929 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:16:59.523636  163929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:16:59.532828  163929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (597 bytes)
	I0127 03:16:59.548791  163929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:16:59.561414  163929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0127 03:16:59.573976  163929 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0127 03:16:59.576816  163929 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703 for IP: 192.168.67.2
	I0127 03:16:59.576904  163929 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
	I0127 03:16:59.576938  163929 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
	I0127 03:16:59.577011  163929 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.key
	I0127 03:16:59.577078  163929 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/apiserver.key.c7fa3a9e
	I0127 03:16:59.577134  163929 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/proxy-client.key
	I0127 03:16:59.577241  163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem (1338 bytes)
	W0127 03:16:59.577277  163929 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703_empty.pem, impossibly tiny 0 bytes
	I0127 03:16:59.577294  163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 03:16:59.577333  163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:16:59.577364  163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:16:59.577403  163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1675 bytes)
	I0127 03:16:59.577453  163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem (1708 bytes)
	I0127 03:16:59.578391  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0127 03:16:59.594769  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 03:16:59.761796  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:16:59.782293  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 03:16:59.800687  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:16:59.826545  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 03:16:59.851809  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:16:59.870503  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:16:59.887991  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem --> /usr/share/ca-certificates/67032.pem (1708 bytes)
	I0127 03:16:59.905075  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:16:59.927010  163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem --> /usr/share/ca-certificates/6703.pem (1338 bytes)
	I0127 03:16:59.949845  163929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:16:59.964096  163929 ssh_runner.go:195] Run: openssl version
	I0127 03:16:59.968788  163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67032.pem && ln -fs /usr/share/ca-certificates/67032.pem /etc/ssl/certs/67032.pem"
	I0127 03:16:59.976180  163929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67032.pem
	I0127 03:16:59.979511  163929 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:47 /usr/share/ca-certificates/67032.pem
	I0127 03:16:59.979561  163929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67032.pem
	I0127 03:16:59.984198  163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67032.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:16:59.990730  163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:16:59.997606  163929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:17:00.000453  163929 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:42 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:17:00.000493  163929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:17:00.005810  163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:17:00.014660  163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6703.pem && ln -fs /usr/share/ca-certificates/6703.pem /etc/ssl/certs/6703.pem"
	I0127 03:17:00.024955  163929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6703.pem
	I0127 03:17:00.029138  163929 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:47 /usr/share/ca-certificates/6703.pem
	I0127 03:17:00.029184  163929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6703.pem
	I0127 03:17:00.036022  163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6703.pem /etc/ssl/certs/51391683.0"
	I0127 03:17:00.045298  163929 kubeadm.go:388] StartCluster: {Name:pause-20220127031541-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:pause-20220127031541-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDo
main:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0127 03:17:00.045378  163929 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 03:17:00.045471  163929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:17:00.074162  163929 cri.go:87] found id: "62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846"
	I0127 03:17:00.074191  163929 cri.go:87] found id: "886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822"
	I0127 03:17:00.074197  163929 cri.go:87] found id: "1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f"
	I0127 03:17:00.074201  163929 cri.go:87] found id: "a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6"
	I0127 03:17:00.074207  163929 cri.go:87] found id: "0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7"
	I0127 03:17:00.074214  163929 cri.go:87] found id: "48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f"
	I0127 03:17:00.074220  163929 cri.go:87] found id: "eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f"
	I0127 03:17:00.074232  163929 cri.go:87] found id: ""
	I0127 03:17:00.074279  163929 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0127 03:17:00.109187  163929 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7","pid":1149,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7/rootfs","created":"2022-01-27T03:16:20.43724908Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465","pid":2042,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465","rootfs":"/run/containerd/io.containerd.runtime.v2.tas
k/k8s.io/1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465/rootfs","created":"2022-01-27T03:16:43.277907757Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-64897985d-p2l5j_89d03314-65b0-43ef-85a5-898223c9a84b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f","pid":1783,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f/rootfs","created":"2022-01-27T03:16:41.087444511Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a2679d06f09b6ae
a2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777","pid":1745,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777/rootfs","created":"2022-01-27T03:16:41.015841481Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-pkggr_767e367d-723c-45b9-bfbb-0cac37e69288"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f","pid":1186,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48c3e32b20f6e9d49fb080085cee5a973
df6945da3263ea7851445fd8ac6060f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f/rootfs","created":"2022-01-27T03:16:20.507418647Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846","pid":2079,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846/rootfs","created":"2022-01-27T03:16:43.410724588Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":
"1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643","pid":980,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643/rootfs","created":"2022-01-27T03:16:20.218224243Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20220127031541-6703_7d17f37224d53544ee825b6ba1742b7b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822","pid":1933,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/886e2
2ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822/rootfs","created":"2022-01-27T03:16:41.51152493Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af","pid":1033,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af/rootfs","created":"2022-01-27T03:16:20.216006155Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8edeaaa7767d3fe8bc23be5c463467a
cae9a1994931a4ee7a2b729451026e4af","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20220127031541-6703_6b94ebcc2e7766018aaf230d0e52b9e7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613","pid":1051,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613/rootfs","created":"2022-01-27T03:16:20.239650389Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20220127031541-6703_cf703987ccbca29b3f499b9bc24e460b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a009159daa09e5f390fb09e4e8f88530528b0c863
0aa629642bac57e67ebd9c6","pid":1178,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6/rootfs","created":"2022-01-27T03:16:20.450881947Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67","pid":1739,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67/rootfs","created":"2022-01-27T03:16:40.97976883
7Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-2bzj7_ceb4d44f-4872-4268-89b4-adb4c55e0102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0","pid":944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0/rootfs","created":"2022-01-27T03:16:20.215772663Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20220127031541-6703_686f7b6ed
893161c15f363ac43c1128c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f","pid":1133,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f/rootfs","created":"2022-01-27T03:16:20.435380134Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af"},"owner":"root"}]
	I0127 03:17:00.109414  163929 cri.go:124] list returned 14 containers
	I0127 03:17:00.109436  163929 cri.go:127] container: {ID:0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7 Status:running}
	I0127 03:17:00.109453  163929 cri.go:133] skipping {0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7 running}: state = "running", want "paused"
	I0127 03:17:00.109469  163929 cri.go:127] container: {ID:1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465 Status:running}
	I0127 03:17:00.109475  163929 cri.go:129] skipping 1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465 - not in ps
	I0127 03:17:00.109484  163929 cri.go:127] container: {ID:1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f Status:running}
	I0127 03:17:00.109489  163929 cri.go:133] skipping {1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f running}: state = "running", want "paused"
	I0127 03:17:00.109497  163929 cri.go:127] container: {ID:2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777 Status:running}
	I0127 03:17:00.109503  163929 cri.go:129] skipping 2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777 - not in ps
	I0127 03:17:00.109513  163929 cri.go:127] container: {ID:48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f Status:running}
	I0127 03:17:00.109519  163929 cri.go:133] skipping {48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f running}: state = "running", want "paused"
	I0127 03:17:00.109525  163929 cri.go:127] container: {ID:62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846 Status:running}
	I0127 03:17:00.109531  163929 cri.go:133] skipping {62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846 running}: state = "running", want "paused"
	I0127 03:17:00.109536  163929 cri.go:127] container: {ID:6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643 Status:running}
	I0127 03:17:00.109542  163929 cri.go:129] skipping 6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643 - not in ps
	I0127 03:17:00.109546  163929 cri.go:127] container: {ID:886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822 Status:running}
	I0127 03:17:00.109552  163929 cri.go:133] skipping {886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822 running}: state = "running", want "paused"
	I0127 03:17:00.109557  163929 cri.go:127] container: {ID:8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af Status:running}
	I0127 03:17:00.109563  163929 cri.go:129] skipping 8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af - not in ps
	I0127 03:17:00.109567  163929 cri.go:127] container: {ID:92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613 Status:running}
	I0127 03:17:00.109572  163929 cri.go:129] skipping 92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613 - not in ps
	I0127 03:17:00.109575  163929 cri.go:127] container: {ID:a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6 Status:running}
	I0127 03:17:00.109579  163929 cri.go:133] skipping {a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6 running}: state = "running", want "paused"
	I0127 03:17:00.109582  163929 cri.go:127] container: {ID:a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67 Status:running}
	I0127 03:17:00.109587  163929 cri.go:129] skipping a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67 - not in ps
	I0127 03:17:00.109592  163929 cri.go:127] container: {ID:e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0 Status:running}
	I0127 03:17:00.109598  163929 cri.go:129] skipping e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0 - not in ps
	I0127 03:17:00.109602  163929 cri.go:127] container: {ID:eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f Status:running}
	I0127 03:17:00.109610  163929 cri.go:133] skipping {eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f running}: state = "running", want "paused"
	I0127 03:17:00.109652  163929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:17:00.118823  163929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 03:17:00.126080  163929 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 03:17:00.127173  163929 kubeconfig.go:92] found "pause-20220127031541-6703" server: "https://192.168.67.2:8443"
	I0127 03:17:00.128285  163929 kapi.go:59] client config for pause-20220127031541-6703: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703
/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 03:17:00.130207  163929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 03:17:00.139543  163929 api_server.go:165] Checking apiserver status ...
	I0127 03:17:00.139593  163929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:17:00.160949  163929 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1133/cgroup
	I0127 03:17:00.169913  163929 api_server.go:181] apiserver freezer: "8:freezer:/docker/e76adc378ad0c84d6d40b961bf5e80f9896e30608935373326ce9025e8a4ab01/kubepods/burstable/pod6b94ebcc2e7766018aaf230d0e52b9e7/eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f"
	I0127 03:17:00.169978  163929 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e76adc378ad0c84d6d40b961bf5e80f9896e30608935373326ce9025e8a4ab01/kubepods/burstable/pod6b94ebcc2e7766018aaf230d0e52b9e7/eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f/freezer.state
	I0127 03:17:00.176622  163929 api_server.go:203] freezer state: "THAWED"
	I0127 03:17:00.176654  163929 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0127 03:17:00.181550  163929 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
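
The apiserver status check logged just above works in three steps: pgrep resolves the kube-apiserver PID, that PID's freezer cgroup is looked up in /proc/<pid>/cgroup, and freezer.state is read before the /healthz probe. Below is a minimal stand-alone Go sketch of the cgroup half only, assuming a cgroup v1 host like the one in this log; the PID 1133 is simply the one pgrep returned above, and none of this is minikube's actual helper code.

// freezercheck.go - minimal sketch: given a PID, read its cgroup v1 freezer state.
// Paths assume a cgroup v1 host as in the log above; the example PID is illustrative.
package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState returns the freezer.state value ("THAWED", "FROZEN", ...) for pid.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		// cgroup v1 lines look like "8:freezer:/docker/<id>/kubepods/..."
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup found for pid %d", pid)
}

func main() {
	state, err := freezerState(1133) // PID taken from the pgrep output above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("freezer state:", state)
}
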
	I0127 03:17:00.199322  163929 system_pods.go:86] 7 kube-system pods found
	I0127 03:17:00.199353  163929 system_pods.go:89] "coredns-64897985d-p2l5j" [89d03314-65b0-43ef-85a5-898223c9a84b] Running
	I0127 03:17:00.199362  163929 system_pods.go:89] "etcd-pause-20220127031541-6703" [44237bcb-32c0-47b7-959b-d600f9c50922] Running
	I0127 03:17:00.199370  163929 system_pods.go:89] "kindnet-pkggr" [767e367d-723c-45b9-bfbb-0cac37e69288] Running
	I0127 03:17:00.199377  163929 system_pods.go:89] "kube-apiserver-pause-20220127031541-6703" [38acbadd-7b65-4bf9-b495-0fa85acf147c] Running
	I0127 03:17:00.199388  163929 system_pods.go:89] "kube-controller-manager-pause-20220127031541-6703" [08c12ca6-8b0e-4439-9a2d-e804a2950199] Running
	I0127 03:17:00.199397  163929 system_pods.go:89] "kube-proxy-2bzj7" [ceb4d44f-4872-4268-89b4-adb4c55e0102] Running
	I0127 03:17:00.199403  163929 system_pods.go:89] "kube-scheduler-pause-20220127031541-6703" [20d1efc5-aaf3-4c4a-9e73-6ddad3b56191] Running
	I0127 03:17:00.201052  163929 api_server.go:140] control plane version: v1.23.2
	I0127 03:17:00.201079  163929 kubeadm.go:618] The running cluster does not require reconfiguration: 192.168.67.2
	I0127 03:17:00.201087  163929 kubeadm.go:390] StartCluster complete in 155.792656ms
	I0127 03:17:00.201105  163929 settings.go:142] acquiring lock: {Name:mkfac99b88cf5519bc3b0da9d34ba6bc12584830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:17:00.201197  163929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0127 03:17:00.201935  163929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk52def711e0760588c8e7c9e046110fe006e484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:17:00.202698  163929 kapi.go:59] client config for pause-20220127031541-6703: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703
/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 03:17:00.207898  163929 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220127031541-6703" rescaled to 1
	I0127 03:17:00.207957  163929 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 03:16:56.666432  165385 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0127 03:16:56.666735  165385 start.go:160] libmachine.API.Create for "cert-options-20220127031655-6703" (driver="docker")
	I0127 03:16:56.666763  165385 client.go:168] LocalClient.Create starting
	I0127 03:16:56.666846  165385 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem
	I0127 03:16:56.666877  165385 main.go:130] libmachine: Decoding PEM data...
	I0127 03:16:56.666889  165385 main.go:130] libmachine: Parsing certificate...
	I0127 03:16:56.666950  165385 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem
	I0127 03:16:56.666965  165385 main.go:130] libmachine: Decoding PEM data...
	I0127 03:16:56.666972  165385 main.go:130] libmachine: Parsing certificate...
	I0127 03:16:56.667320  165385 cli_runner.go:133] Run: docker network inspect cert-options-20220127031655-6703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 03:16:56.698737  165385 cli_runner.go:180] docker network inspect cert-options-20220127031655-6703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 03:16:56.698793  165385 network_create.go:254] running [docker network inspect cert-options-20220127031655-6703] to gather additional debugging logs...
	I0127 03:16:56.698809  165385 cli_runner.go:133] Run: docker network inspect cert-options-20220127031655-6703
	W0127 03:16:56.737360  165385 cli_runner.go:180] docker network inspect cert-options-20220127031655-6703 returned with exit code 1
	I0127 03:16:56.737384  165385 network_create.go:257] error running [docker network inspect cert-options-20220127031655-6703]: docker network inspect cert-options-20220127031655-6703: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cert-options-20220127031655-6703
	I0127 03:16:56.737406  165385 network_create.go:259] output of [docker network inspect cert-options-20220127031655-6703]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cert-options-20220127031655-6703
	
	** /stderr **
	I0127 03:16:56.737460  165385 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 03:16:56.774995  165385 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010df0] misses:0}
	I0127 03:16:56.775031  165385 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0127 03:16:56.775047  165385 network_create.go:106] attempt to create docker network cert-options-20220127031655-6703 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0127 03:16:56.775085  165385 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220127031655-6703
	I0127 03:16:56.957687  165385 network_create.go:90] docker network cert-options-20220127031655-6703 192.168.49.0/24 created
	I0127 03:16:56.957713  165385 kic.go:106] calculated static IP "192.168.49.2" for the "cert-options-20220127031655-6703" container
	I0127 03:16:56.957774  165385 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0127 03:16:56.992057  165385 cli_runner.go:133] Run: docker volume create cert-options-20220127031655-6703 --label name.minikube.sigs.k8s.io=cert-options-20220127031655-6703 --label created_by.minikube.sigs.k8s.io=true
	I0127 03:16:57.032349  165385 oci.go:102] Successfully created a docker volume cert-options-20220127031655-6703
	I0127 03:16:57.032419  165385 cli_runner.go:133] Run: docker run --rm --name cert-options-20220127031655-6703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220127031655-6703 --entrypoint /usr/bin/test -v cert-options-20220127031655-6703:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
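
The cert-options profile above is being provisioned by shelling out to the docker CLI: first a labelled bridge network with a fixed subnet and gateway, then a named volume that the preload sidecar mounts. A rough Go sketch of that shell-out pattern follows; the profile name, subnet, and labels are copied from the log, but the helper itself is illustrative and not minikube's implementation.

// provision_sketch.go - illustrative only: shell out to the docker CLI the way the log above does.
package main

import (
	"fmt"
	"os/exec"
)

// run invokes the docker CLI and surfaces its combined output on failure.
func run(args ...string) error {
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	name := "cert-options-20220127031655-6703" // profile name from the log
	// Bridge network with the subnet/gateway/MTU seen in the log.
	if err := run("network", "create", "--driver=bridge",
		"--subnet=192.168.49.0/24", "--gateway=192.168.49.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true", name); err != nil {
		fmt.Println(err)
	}
	// Named volume shared by the preload sidecar and, later, the node container.
	if err := run("volume", "create", name,
		"--label", "name.minikube.sigs.k8s.io="+name,
		"--label", "created_by.minikube.sigs.k8s.io=true"); err != nil {
		fmt.Println(err)
	}
}
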
	I0127 03:17:00.257241  163929 out.go:176] * Verifying Kubernetes components...
	I0127 03:17:00.257333  163929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:17:00.208163  163929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 03:17:00.208193  163929 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0127 03:17:00.208339  163929 config.go:176] Loaded profile config "pause-20220127031541-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
	I0127 03:17:00.257504  163929 addons.go:65] Setting storage-provisioner=true in profile "pause-20220127031541-6703"
	I0127 03:17:00.257527  163929 addons.go:153] Setting addon storage-provisioner=true in "pause-20220127031541-6703"
	W0127 03:17:00.257535  163929 addons.go:165] addon storage-provisioner should already be in state true
	I0127 03:17:00.257540  163929 addons.go:65] Setting default-storageclass=true in profile "pause-20220127031541-6703"
	I0127 03:17:00.257561  163929 host.go:66] Checking if "pause-20220127031541-6703" exists ...
	I0127 03:17:00.257562  163929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220127031541-6703"
	I0127 03:17:00.257902  163929 cli_runner.go:133] Run: docker container inspect pause-20220127031541-6703 --format={{.State.Status}}
	I0127 03:17:00.258087  163929 cli_runner.go:133] Run: docker container inspect pause-20220127031541-6703 --format={{.State.Status}}
	I0127 03:17:00.320738  163929 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:17:00.320899  163929 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:17:00.320912  163929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 03:17:00.320965  163929 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220127031541-6703
	I0127 03:17:00.327928  163929 kapi.go:59] client config for pause-20220127031541-6703: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703
/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 03:17:00.336634  163929 addons.go:153] Setting addon default-storageclass=true in "pause-20220127031541-6703"
	W0127 03:17:00.336659  163929 addons.go:165] addon default-storageclass should already be in state true
	I0127 03:17:00.336691  163929 host.go:66] Checking if "pause-20220127031541-6703" exists ...
	I0127 03:17:00.337224  163929 cli_runner.go:133] Run: docker container inspect pause-20220127031541-6703 --format={{.State.Status}}
	I0127 03:17:00.347472  163929 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0127 03:17:00.347539  163929 node_ready.go:35] waiting up to 6m0s for node "pause-20220127031541-6703" to be "Ready" ...
	I0127 03:17:00.354716  163929 node_ready.go:49] node "pause-20220127031541-6703" has status "Ready":"True"
	I0127 03:17:00.354738  163929 node_ready.go:38] duration metric: took 7.178279ms waiting for node "pause-20220127031541-6703" to be "Ready" ...
	I0127 03:17:00.354748  163929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:17:00.361251  163929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-p2l5j" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.381991  163929 pod_ready.go:92] pod "coredns-64897985d-p2l5j" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:00.382013  163929 pod_ready.go:81] duration metric: took 20.7292ms waiting for pod "coredns-64897985d-p2l5j" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.382026  163929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.391564  163929 pod_ready.go:92] pod "etcd-pause-20220127031541-6703" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:00.391584  163929 pod_ready.go:81] duration metric: took 9.55048ms waiting for pod "etcd-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.391602  163929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.397028  163929 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 03:17:00.397048  163929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 03:17:00.397101  163929 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220127031541-6703
	I0127 03:17:00.398243  163929 pod_ready.go:92] pod "kube-apiserver-pause-20220127031541-6703" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:00.398269  163929 pod_ready.go:81] duration metric: took 6.658839ms waiting for pod "kube-apiserver-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.398281  163929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.401911  163929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/pause-20220127031541-6703/id_rsa Username:docker}
	I0127 03:17:00.402986  163929 pod_ready.go:92] pod "kube-controller-manager-pause-20220127031541-6703" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:00.403013  163929 pod_ready.go:81] duration metric: took 4.721924ms waiting for pod "kube-controller-manager-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.403026  163929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2bzj7" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.457799  163929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/pause-20220127031541-6703/id_rsa Username:docker}
	I0127 03:17:00.532049  163929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 03:17:00.623174  163929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 03:17:00.785143  163929 pod_ready.go:92] pod "kube-proxy-2bzj7" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:00.785205  163929 pod_ready.go:81] duration metric: took 382.170785ms waiting for pod "kube-proxy-2bzj7" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.785224  163929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:00.906772  163929 out.go:176] * Enabled addons: default-storageclass, storage-provisioner
	I0127 03:17:00.906806  163929 addons.go:417] enableAddons completed in 698.622737ms
	I0127 03:17:01.184657  163929 pod_ready.go:92] pod "kube-scheduler-pause-20220127031541-6703" in "kube-system" namespace has status "Ready":"True"
	I0127 03:17:01.184683  163929 pod_ready.go:81] duration metric: took 399.447468ms waiting for pod "kube-scheduler-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
	I0127 03:17:01.184691  163929 pod_ready.go:38] duration metric: took 829.929253ms for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 03:17:01.184708  163929 api_server.go:51] waiting for apiserver process to appear ...
	I0127 03:17:01.184740  163929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 03:17:01.209882  163929 api_server.go:71] duration metric: took 1.001891531s to wait for apiserver process to appear ...
	I0127 03:17:01.209974  163929 api_server.go:87] waiting for apiserver healthz status ...
	I0127 03:17:01.210002  163929 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0127 03:17:01.220672  163929 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
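
Each healthz probe in this log is an HTTPS GET against the apiserver, authenticated with the profile's client certificate and the minikube CA whose paths appear in the rest.Config dumps above. The following is a plausible stand-alone sketch of such a probe, not the code minikube runs; the certificate paths and the 192.168.67.2:8443 endpoint are copied from this run and would differ elsewhere.

// healthz_sketch.go - illustrative mutual-TLS probe of the apiserver /healthz endpoint.
// Certificate paths are copied from the rest.Config entries in this log.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube"
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/pause-20220127031541-6703/client.crt",
		base+"/profiles/pause-20220127031541-6703/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok", as logged above
}
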
	I0127 03:17:01.221639  163929 api_server.go:140] control plane version: v1.23.2
	I0127 03:17:01.221661  163929 api_server.go:130] duration metric: took 11.668115ms to wait for apiserver health ...
	I0127 03:17:01.221670  163929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 03:17:01.387956  163929 system_pods.go:59] 8 kube-system pods found
	I0127 03:17:01.387993  163929 system_pods.go:61] "coredns-64897985d-p2l5j" [89d03314-65b0-43ef-85a5-898223c9a84b] Running
	I0127 03:17:01.388002  163929 system_pods.go:61] "etcd-pause-20220127031541-6703" [44237bcb-32c0-47b7-959b-d600f9c50922] Running
	I0127 03:17:01.388008  163929 system_pods.go:61] "kindnet-pkggr" [767e367d-723c-45b9-bfbb-0cac37e69288] Running
	I0127 03:17:01.388014  163929 system_pods.go:61] "kube-apiserver-pause-20220127031541-6703" [38acbadd-7b65-4bf9-b495-0fa85acf147c] Running
	I0127 03:17:01.388021  163929 system_pods.go:61] "kube-controller-manager-pause-20220127031541-6703" [08c12ca6-8b0e-4439-9a2d-e804a2950199] Running
	I0127 03:17:01.388028  163929 system_pods.go:61] "kube-proxy-2bzj7" [ceb4d44f-4872-4268-89b4-adb4c55e0102] Running
	I0127 03:17:01.388034  163929 system_pods.go:61] "kube-scheduler-pause-20220127031541-6703" [20d1efc5-aaf3-4c4a-9e73-6ddad3b56191] Running
	I0127 03:17:01.388045  163929 system_pods.go:61] "storage-provisioner" [b5e19d29-d637-4733-bc02-57d96df8234e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 03:17:01.388061  163929 system_pods.go:74] duration metric: took 166.374613ms to wait for pod list to return data ...
	I0127 03:17:01.388069  163929 default_sa.go:34] waiting for default service account to be created ...
	I0127 03:17:01.585705  163929 default_sa.go:45] found service account: "default"
	I0127 03:17:01.585732  163929 default_sa.go:55] duration metric: took 197.656465ms for default service account to be created ...
	I0127 03:17:01.585741  163929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0127 03:17:01.787369  163929 system_pods.go:86] 8 kube-system pods found
	I0127 03:17:01.787397  163929 system_pods.go:89] "coredns-64897985d-p2l5j" [89d03314-65b0-43ef-85a5-898223c9a84b] Running
	I0127 03:17:01.787403  163929 system_pods.go:89] "etcd-pause-20220127031541-6703" [44237bcb-32c0-47b7-959b-d600f9c50922] Running
	I0127 03:17:01.787407  163929 system_pods.go:89] "kindnet-pkggr" [767e367d-723c-45b9-bfbb-0cac37e69288] Running
	I0127 03:17:01.787411  163929 system_pods.go:89] "kube-apiserver-pause-20220127031541-6703" [38acbadd-7b65-4bf9-b495-0fa85acf147c] Running
	I0127 03:17:01.787416  163929 system_pods.go:89] "kube-controller-manager-pause-20220127031541-6703" [08c12ca6-8b0e-4439-9a2d-e804a2950199] Running
	I0127 03:17:01.787421  163929 system_pods.go:89] "kube-proxy-2bzj7" [ceb4d44f-4872-4268-89b4-adb4c55e0102] Running
	I0127 03:17:01.787428  163929 system_pods.go:89] "kube-scheduler-pause-20220127031541-6703" [20d1efc5-aaf3-4c4a-9e73-6ddad3b56191] Running
	I0127 03:17:01.787439  163929 system_pods.go:89] "storage-provisioner" [b5e19d29-d637-4733-bc02-57d96df8234e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 03:17:01.787461  163929 system_pods.go:126] duration metric: took 201.716625ms to wait for k8s-apps to be running ...
	I0127 03:17:01.787468  163929 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 03:17:01.787506  163929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:17:01.798574  163929 system_svc.go:56] duration metric: took 11.096908ms WaitForService to wait for kubelet.
	I0127 03:17:01.798602  163929 kubeadm.go:542] duration metric: took 1.590617981s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0127 03:17:01.798626  163929 node_conditions.go:102] verifying NodePressure condition ...
	I0127 03:17:01.986365  163929 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0127 03:17:01.986391  163929 node_conditions.go:123] node cpu capacity is 8
	I0127 03:17:01.986403  163929 node_conditions.go:105] duration metric: took 187.772865ms to run NodePressure ...
	I0127 03:17:01.986412  163929 start.go:213] waiting for startup goroutines ...
	I0127 03:17:02.148770  163929 start.go:496] kubectl: 1.23.3, cluster: 1.23.2 (minor skew: 0)
	I0127 03:17:02.184464  163929 out.go:176] * Done! kubectl is now configured to use "pause-20220127031541-6703" cluster and "default" namespace by default
	I0127 03:17:00.950715  165385 cli_runner.go:186] Completed: docker run --rm --name cert-options-20220127031655-6703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220127031655-6703 --entrypoint /usr/bin/test -v cert-options-20220127031655-6703:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib: (3.918253388s)
	I0127 03:17:00.950737  165385 oci.go:106] Successfully prepared a docker volume cert-options-20220127031655-6703
	I0127 03:17:00.950779  165385 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
	I0127 03:17:00.950798  165385 kic.go:179] Starting extracting preloaded images to volume ...
	I0127 03:17:00.950849  165385 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20220127031655-6703:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 03:17:03.570053  162215 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.223111505s)
	I0127 03:17:03.570082  162215 containerd.go:562] Took 11.223225 seconds to extract the tarball
	I0127 03:17:03.570093  162215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 03:17:03.660335  162215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 03:17:03.871641  162215 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 03:17:04.105253  162215 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 03:17:04.124625  162215 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
	I0127 03:17:04.124734  162215 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0127 03:17:04.124936  162215 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0127 03:17:04.125040  162215 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0127 03:17:04.125150  162215 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0127 03:17:04.125243  162215 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0127 03:17:04.125465  162215 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
	I0127 03:17:04.125633  162215 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0127 03:17:04.125741  162215 image.go:134] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0127 03:17:04.125832  162215 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 03:17:04.125921  162215 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0127 03:17:04.127466  162215 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0127 03:17:04.127977  162215 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
	I0127 03:17:04.128109  162215 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
	I0127 03:17:04.128136  162215 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist
	I0127 03:17:04.128278  162215 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.0: Error response from daemon: reference does not exist
	I0127 03:17:04.128413  162215 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.0: Error response from daemon: reference does not exist
	I0127 03:17:04.128440  162215 image.go:180] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I0127 03:17:04.128545  162215 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.0: Error response from daemon: reference does not exist
	I0127 03:17:04.128565  162215 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.0: Error response from daemon: reference does not exist
	I0127 03:17:04.128669  162215 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist
	I0127 03:17:04.418344  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0"
	I0127 03:17:04.419139  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0"
	I0127 03:17:04.424426  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0"
	I0127 03:17:04.427553  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0"
	I0127 03:17:04.443893  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0"
	I0127 03:17:04.444493  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0"
	I0127 03:17:04.465910  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0127 03:17:04.509961  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2"
	I0127 03:17:05.015592  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.7"
	I0127 03:17:05.022587  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.3.1"
	I0127 03:17:05.309789  162215 cache_images.go:123] Successfully loaded all cached images
	I0127 03:17:05.309814  162215 cache_images.go:92] LoadImages completed in 1.185159822s
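
LoadImages above decides, per image, whether anything needs to be transferred by running ctr -n=k8s.io images check inside the node and grepping for the reference. A small local sketch of that presence check follows; it shells out on the local machine rather than over minikube's ssh_runner, so it is illustrative only, and the pause:3.2 reference is just one of the images listed above.

// imagecheck_sketch.go - illustrative: ask containerd's k8s.io namespace whether an image is present,
// mirroring the `ctr -n=k8s.io images check | grep <ref>` calls in the log (run locally, not over SSH).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether ref appears in containerd's image list for the k8s.io namespace.
func imagePresent(ref string) (bool, error) {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "check").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("ctr images check: %v\n%s", err, out)
	}
	for _, line := range strings.Split(string(out), "\n") {
		// The REF column is first; match it followed by whitespace.
		if strings.HasPrefix(line, ref+" ") || strings.HasPrefix(line, ref+"\t") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePresent("k8s.gcr.io/pause:3.2")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("present:", ok)
}
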
	I0127 03:17:05.309874  162215 ssh_runner.go:195] Run: sudo crictl info
	I0127 03:17:05.330052  162215 cni.go:93] Creating CNI manager for ""
	I0127 03:17:05.330072  162215 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0127 03:17:05.330083  162215 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0127 03:17:05.330095  162215 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.59.48 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20220127031538-6703 NodeName:running-upgrade-20220127031538-6703 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.59.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.59.48 CgroupDriver:cgrou
pfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0127 03:17:05.330263  162215 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.59.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "running-upgrade-20220127031538-6703"
	  kubeletExtraArgs:
	    node-ip: 192.168.59.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.59.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
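
The kubeadm config printed above is rendered from templates filled in with the logged kubeadm options (advertise address, node name, CRI socket, and so on). The toy text/template program below shows the shape of that generation step for just the InitConfiguration fragment; the template text and field names are illustrative, not minikube's actual templates.

// kubeadm_template_sketch.go - toy rendering of an InitConfiguration fragment via text/template.
// The template text and the struct fields here are illustrative, not minikube's real templates.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values copied from the kubeadm options logged above.
	t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.59.48",
		APIServerPort:    8443,
		CRISocket:        "/run/containerd/containerd.sock",
		NodeName:         "running-upgrade-20220127031538-6703",
		NodeIP:           "192.168.59.48",
	})
}
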
	
	I0127 03:17:05.330371  162215 kubeadm.go:791] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=running-upgrade-20220127031538-6703 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.59.48 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0127 03:17:05.330429  162215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 03:17:05.339921  162215 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 03:17:05.339997  162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 03:17:05.347885  162215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0127 03:17:05.363235  162215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 03:17:05.408320  162215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
	I0127 03:17:05.432119  162215 ssh_runner.go:195] Run: grep 192.168.59.48	control-plane.minikube.internal$ /etc/hosts
	I0127 03:17:05.436673  162215 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703 for IP: 192.168.59.48
	I0127 03:17:05.436796  162215 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
	I0127 03:17:05.436850  162215 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
	I0127 03:17:05.436973  162215 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.key
	I0127 03:17:05.437053  162215 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.key.fc40ab25
	I0127 03:17:05.437109  162215 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.key
	I0127 03:17:05.437225  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem (1338 bytes)
	W0127 03:17:05.437268  162215 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703_empty.pem, impossibly tiny 0 bytes
	I0127 03:17:05.437284  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 03:17:05.437316  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
	I0127 03:17:05.437342  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
	I0127 03:17:05.437364  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1675 bytes)
	I0127 03:17:05.437417  162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem (1708 bytes)
	I0127 03:17:05.438513  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0127 03:17:05.461440  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 03:17:05.516402  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 03:17:05.537977  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 03:17:05.561369  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 03:17:05.629987  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 03:17:05.657572  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 03:17:05.724754  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 03:17:05.808160  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem --> /usr/share/ca-certificates/67032.pem (1708 bytes)
	I0127 03:17:05.914815  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 03:17:05.937255  162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem --> /usr/share/ca-certificates/6703.pem (1338 bytes)
	I0127 03:17:05.960381  162215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 03:17:06.036900  162215 ssh_runner.go:195] Run: openssl version
	I0127 03:17:06.042924  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67032.pem && ln -fs /usr/share/ca-certificates/67032.pem /etc/ssl/certs/67032.pem"
	I0127 03:17:06.064802  162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67032.pem
	I0127 03:17:06.068353  162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:47 /usr/share/ca-certificates/67032.pem
	I0127 03:17:06.068400  162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67032.pem
	I0127 03:17:06.074066  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67032.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 03:17:06.104715  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 03:17:06.112573  162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:17:06.115981  162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:42 /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:17:06.116036  162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 03:17:06.120843  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 03:17:06.127821  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6703.pem && ln -fs /usr/share/ca-certificates/6703.pem /etc/ssl/certs/6703.pem"
	I0127 03:17:06.136027  162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6703.pem
	I0127 03:17:06.139199  162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:47 /usr/share/ca-certificates/6703.pem
	I0127 03:17:06.139245  162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6703.pem
	I0127 03:17:06.144267  162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6703.pem /etc/ssl/certs/51391683.0"
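
The openssl sequence above wires each CA into the node's trust store: the PEM is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs, hashed with openssl x509 -hash, and finally exposed as /etc/ssl/certs/<hash>.0. The sketch below covers only the hash-and-symlink step and collapses the two ln -fs calls into one; it assumes root and the openssl CLI, and is not minikube's code.

// cahash_sketch.go - illustrative: link a CA cert into /etc/ssl/certs by its openssl subject hash,
// the same "openssl x509 -hash" + "ln -fs" dance as in the log (requires root and the openssl CLI).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the cert's subject hash and creates /etc/ssl/certs/<hash>.0 pointing at it.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %v", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace an existing link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
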
	I0127 03:17:06.151276  162215 kubeadm.go:388] StartCluster: {Name:running-upgrade-20220127031538-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
	I0127 03:17:06.151365  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 03:17:06.151395  162215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 03:17:06.167608  162215 cri.go:87] found id: ""
	I0127 03:17:06.167655  162215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 03:17:06.207586  162215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 03:17:06.215687  162215 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 03:17:06.216403  162215 kubeconfig.go:116] verify returned: extract IP: "running-upgrade-20220127031538-6703" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0127 03:17:06.216619  162215 kubeconfig.go:127] "running-upgrade-20220127031538-6703" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig - will repair!
	I0127 03:17:06.217195  162215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk52def711e0760588c8e7c9e046110fe006e484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 03:17:06.241403  162215 kapi.go:59] client config for running-upgrade-20220127031538-6703: &rest.Config{Host:"https://192.168.59.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/runn
ing-upgrade-20220127031538-6703/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 03:17:06.243260  162215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 03:17:06.251876  162215 kubeadm.go:593] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-01-27 03:16:10.898540450 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-01-27 03:17:05.423671678 +0000
	@@ -65,4 +65,10 @@
	 apiVersion: kubeproxy.config.k8s.io/v1alpha1
	 kind: KubeProxyConfiguration
	 clusterCIDR: "10.244.0.0/16"
	-metricsBindAddress: 192.168.59.48:10249
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
	I0127 03:17:06.251922  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:17:06.932139  162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:17:06.943350  162215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 03:17:06.959832  162215 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0127 03:17:06.959882  162215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:17:06.968121  162215 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:17:06.968171  162215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	W0127 03:17:07.446011  162215 out.go:241] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 03:17:07.446055  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0127 03:17:07.512319  162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 03:17:07.522160  162215 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0127 03:17:07.522213  162215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 03:17:07.529324  162215 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 03:17:07.529370  162215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 03:17:07.690870  162215 kubeadm.go:390] StartCluster complete in 1.539598545s
	I0127 03:17:07.690939  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 03:17:07.690986  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 03:17:07.704611  162215 cri.go:87] found id: ""
	I0127 03:17:07.704637  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.704645  162215 logs.go:276] No container was found matching "kube-apiserver"
	I0127 03:17:07.704668  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 03:17:07.704745  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 03:17:07.717933  162215 cri.go:87] found id: ""
	I0127 03:17:07.717961  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.717971  162215 logs.go:276] No container was found matching "etcd"
	I0127 03:17:07.717979  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 03:17:07.718026  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 03:17:07.731059  162215 cri.go:87] found id: ""
	I0127 03:17:07.731079  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.731085  162215 logs.go:276] No container was found matching "coredns"
	I0127 03:17:07.731090  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 03:17:07.731152  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 03:17:07.745381  162215 cri.go:87] found id: ""
	I0127 03:17:07.745402  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.745408  162215 logs.go:276] No container was found matching "kube-scheduler"
	I0127 03:17:07.745417  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 03:17:07.745455  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 03:17:07.762094  162215 cri.go:87] found id: ""
	I0127 03:17:07.762125  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.762133  162215 logs.go:276] No container was found matching "kube-proxy"
	I0127 03:17:07.762142  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 03:17:07.762183  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 03:17:07.775553  162215 cri.go:87] found id: ""
	I0127 03:17:07.775580  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.775586  162215 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0127 03:17:07.775591  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 03:17:07.775638  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 03:17:07.789738  162215 cri.go:87] found id: ""
	I0127 03:17:07.789766  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.789774  162215 logs.go:276] No container was found matching "storage-provisioner"
	I0127 03:17:07.789782  162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 03:17:07.789830  162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 03:17:07.803044  162215 cri.go:87] found id: ""
	I0127 03:17:07.803071  162215 logs.go:274] 0 containers: []
	W0127 03:17:07.803078  162215 logs.go:276] No container was found matching "kube-controller-manager"
	I0127 03:17:07.803086  162215 logs.go:123] Gathering logs for kubelet ...
	I0127 03:17:07.803117  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 03:17:07.895271  162215 logs.go:123] Gathering logs for dmesg ...
	I0127 03:17:07.895305  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 03:17:07.915018  162215 logs.go:123] Gathering logs for describe nodes ...
	I0127 03:17:07.915058  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 03:17:08.197863  162215 logs.go:123] Gathering logs for containerd ...
	I0127 03:17:08.197892  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 03:17:08.257172  162215 logs.go:123] Gathering logs for container status ...
	I0127 03:17:08.257213  162215 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 03:17:08.275508  162215 out.go:370] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0127 03:17:08.275548  162215 out.go:241] * 
	W0127 03:17:08.275688  162215 out.go:241] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 03:17:08.275702  162215 out.go:241] * 
	W0127 03:17:08.276469  162215 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 03:17:08.396612  162215 out.go:176] 
	W0127 03:17:08.396806  162215 out.go:241] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.11.0-1028-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-10259]: Port 10259 is in use
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 03:17:08.396919  162215 out.go:241] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0127 03:17:08.396990  162215 out.go:241] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Thu 2022-01-27 03:15:48 UTC, end at Thu 2022-01-27 03:17:10 UTC. --
	Jan 27 03:17:04 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:04.145767423Z" level=info msg="Start streaming server"
	Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.643542857Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-d9kxz,Uid:7573f936-998f-42ea-834e-ae5675f3e07d,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644142734Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-g9t9p,Uid:1488662f-9013-40e3-bbf5-e3fafc03bffc,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644297349Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20220127031538-6703,Uid:3478da2c440ba32fb6c087b3f3b99813,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644388013Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20220127031538-6703,Uid:8d4a75d38cddca902e7c95dda0b36b76,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644469219Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20220127031538-6703,Uid:a3e7be694ef7cf952503c5d331abc0ac,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644662192Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20220127031538-6703,Uid:047b1dabd2a0c8bbc03a956e423aeb4e,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.952733661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9kxz,Uid:7573f936-998f-42ea-834e-ae5675f3e07d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.371307697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20220127031538-6703,Uid:8d4a75d38cddca902e7c95dda0b36b76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.422554742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20220127031538-6703,Uid:a3e7be694ef7cf952503c5d331abc0ac,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.509206236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20220127031538-6703,Uid:3478da2c440ba32fb6c087b3f3b99813,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.528476770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20220127031538-6703,Uid:047b1dabd2a0c8bbc03a956e423aeb4e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.530614206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-g9t9p,Uid:1488662f-9013-40e3-bbf5-e3fafc03bffc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642365289Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20220127031538-6703,Uid:a3e7be694ef7cf952503c5d331abc0ac,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642422912Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20220127031538-6703,Uid:3478da2c440ba32fb6c087b3f3b99813,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642369399Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20220127031538-6703,Uid:8d4a75d38cddca902e7c95dda0b36b76,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642652848Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-g9t9p,Uid:1488662f-9013-40e3-bbf5-e3fafc03bffc,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642828011Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20220127031538-6703,Uid:047b1dabd2a0c8bbc03a956e423aeb4e,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642866758Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-d9kxz,Uid:7573f936-998f-42ea-834e-ae5675f3e07d,Namespace:kube-system,Attempt:0,}"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.960099940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-g9t9p,Uid:1488662f-9013-40e3-bbf5-e3fafc03bffc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.969463097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20220127031538-6703,Uid:3478da2c440ba32fb6c087b3f3b99813,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.992096286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20220127031538-6703,Uid:8d4a75d38cddca902e7c95dda0b36b76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
	Jan 27 03:17:07 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:07.049775211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20220127031538-6703,Uid:047b1dabd2a0c8bbc03a956e423aeb4e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
	Jan 27 03:17:07 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:07.104427763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9kxz,Uid:7573f936-998f-42ea-834e-ae5675f3e07d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
	Jan 27 03:17:07 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:07.145721347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20220127031538-6703,Uid:a3e7be694ef7cf952503c5d331abc0ac,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
	
	* 
	* ==> describe nodes <==
	* Name:               running-upgrade-20220127031538-6703
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=running-upgrade-20220127031538-6703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9f1e482427589ff8451c4723b6ba53bb9742fbb1
	                    minikube.k8s.io/name=running-upgrade-20220127031538-6703
	                    minikube.k8s.io/updated_at=2022_01_27T03_16_34_0700
	                    minikube.k8s.io/version=v1.16.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 27 Jan 2022 03:16:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-20220127031538-6703
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 27 Jan 2022 03:17:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 27 Jan 2022 03:17:01 +0000   Thu, 27 Jan 2022 03:16:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 27 Jan 2022 03:17:01 +0000   Thu, 27 Jan 2022 03:16:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 27 Jan 2022 03:17:01 +0000   Thu, 27 Jan 2022 03:16:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 27 Jan 2022 03:17:01 +0000   Thu, 27 Jan 2022 03:16:41 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.59.48
	  Hostname:    running-upgrade-20220127031538-6703
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879776Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879776Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f006d88ab0e4ddfa46e7b7e641ee4b5
	  System UUID:                290805a4-ff96-4709-8064-a94b26b5c979
	  Boot ID:                    2a5b9f9a-2bf2-4729-9d70-81647bd52771
	  Kernel Version:             5.11.0-1028-gcp
	  OS Image:                   Ubuntu 20.04.1 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.3
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-running-upgrade-20220127031538-6703                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-g9t9p                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-running-upgrade-20220127031538-6703             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-running-upgrade-20220127031538-6703    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-d9kxz                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-running-upgrade-20220127031538-6703             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 50s                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  50s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  49s (x7 over 50s)  kubelet     Node running-upgrade-20220127031538-6703 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x7 over 50s)  kubelet     Node running-upgrade-20220127031538-6703 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x6 over 50s)  kubelet     Node running-upgrade-20220127031538-6703 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet     Node running-upgrade-20220127031538-6703 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet     Node running-upgrade-20220127031538-6703 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet     Node running-upgrade-20220127031538-6703 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             29s                kubelet     Node running-upgrade-20220127031538-6703 status is now: NodeNotReady
	  Normal  Starting                 22s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000259] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth77835499
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6a 9f 15 52 14 12 08 06
	[Jan27 03:02] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth23474242
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 5f 5c 02 63 f5 08 06
	[  +0.952463] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6a565172
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 93 08 55 00 8f 08 06
	[Jan27 03:05] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf5d45a43
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6a a7 5f 08 08 bf 08 06
	[  +0.972599] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethc4d22b01
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e de 43 25 aa 3b 08 06
	[Jan27 03:08] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth3295a1cb
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 7c 93 33 3b 31 08 06
	[Jan27 03:09] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth269009dd
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 cf 3c da 5a e9 08 06
	[Jan27 03:10] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd4518a3b
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 92 1c df b7 d3 08 06
	[Jan27 03:13] process 'docker/tmp/qemu-check352080006/check' started with executable stack
	[  +2.712247] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethfaae05be
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a f9 5f 21 1c bd 08 06
	[  +1.484257] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth07e4b604
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 31 3e aa c4 4d 08 06
	[Jan27 03:16] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth3c97916f
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 16 68 a1 00 34 f0 08 06
	[ +29.713722] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethab942597
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 29 19 cd 53 cf 08 06
	
	* 
	* ==> kernel <==
	*  03:17:10 up 59 min,  0 users,  load average: 7.67, 4.76, 2.48
	Linux running-upgrade-20220127031538-6703 5.11.0-1028-gcp #32~20.04.1-Ubuntu SMP Wed Jan 12 20:08:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.1 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-01-27 03:15:48 UTC, end at Thu 2022-01-27 03:17:10 UTC. --
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.916350    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.916536    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.916703    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.916851    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917007    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917161    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917319    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917551    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917719    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917878    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918030    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918184    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918343    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918496    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918661    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918816    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918986    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.919679    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.919965    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.920185    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.920373    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.920556    2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 systemd[1]: kubelet.service: Succeeded.
	Jan 27 03:17:06 running-upgrade-20220127031538-6703 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
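Note on the "==> containerd <==" section above: every RunPodSandbox attempt fails because the k8s.gcr.io/pause:3.2 sandbox image has a missing blob in containerd's content store, and the retry pulls are cancelled. A minimal sketch of how one could verify that state by hand, assuming shell access to the node (e.g. minikube ssh -p running-upgrade-20220127031538-6703); these are standard crictl/ctr invocations, not commands taken from this run:

	# is the sandbox image listed, and is its content complete in the k8s.io namespace?
	sudo crictl images | grep pause
	sudo ctr -n k8s.io images check | grep 'pause:3.2'
	# re-pull to restore the missing blob, network permitting
	sudo ctr -n k8s.io images pull k8s.gcr.io/pause:3.2
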
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p running-upgrade-20220127031538-6703 -n running-upgrade-20220127031538-6703
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p running-upgrade-20220127031538-6703 -n running-upgrade-20220127031538-6703: exit status 2 (604.029969ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context running-upgrade-20220127031538-6703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-74ff55c5b-9mfqg etcd-running-upgrade-20220127031538-6703 kindnet-g9t9p kube-apiserver-running-upgrade-20220127031538-6703 kube-controller-manager-running-upgrade-20220127031538-6703 kube-proxy-d9kxz kube-scheduler-running-upgrade-20220127031538-6703 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestRunningBinaryUpgrade]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context running-upgrade-20220127031538-6703 describe pod coredns-74ff55c5b-9mfqg etcd-running-upgrade-20220127031538-6703 kindnet-g9t9p kube-apiserver-running-upgrade-20220127031538-6703 kube-controller-manager-running-upgrade-20220127031538-6703 kube-proxy-d9kxz kube-scheduler-running-upgrade-20220127031538-6703 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context running-upgrade-20220127031538-6703 describe pod coredns-74ff55c5b-9mfqg etcd-running-upgrade-20220127031538-6703 kindnet-g9t9p kube-apiserver-running-upgrade-20220127031538-6703 kube-controller-manager-running-upgrade-20220127031538-6703 kube-proxy-d9kxz kube-scheduler-running-upgrade-20220127031538-6703 storage-provisioner: exit status 1 (93.801982ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-9mfqg" not found
	Error from server (NotFound): pods "etcd-running-upgrade-20220127031538-6703" not found
	Error from server (NotFound): pods "kindnet-g9t9p" not found
	Error from server (NotFound): pods "kube-apiserver-running-upgrade-20220127031538-6703" not found
	Error from server (NotFound): pods "kube-controller-manager-running-upgrade-20220127031538-6703" not found
	Error from server (NotFound): pods "kube-proxy-d9kxz" not found
	Error from server (NotFound): pods "kube-scheduler-running-upgrade-20220127031538-6703" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context running-upgrade-20220127031538-6703 describe pod coredns-74ff55c5b-9mfqg etcd-running-upgrade-20220127031538-6703 kindnet-g9t9p kube-apiserver-running-upgrade-20220127031538-6703 kube-controller-manager-running-upgrade-20220127031538-6703 kube-proxy-d9kxz kube-scheduler-running-upgrade-20220127031538-6703 storage-provisioner: exit status 1
helpers_test.go:176: Cleaning up "running-upgrade-20220127031538-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220127031538-6703

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220127031538-6703: (2.717611934s)
--- FAIL: TestRunningBinaryUpgrade (95.92s)
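
Note on the failure mode: the exit reason is GUEST_PORT_IN_USE. kubeadm's preflight found ports 8443, 10257, 10259, 2379 and 2380 already bound, apparently because the control plane brought up by the old minikube binary was still holding those ports when the new binary re-ran kubeadm init. A minimal sketch of how one could identify the owners of those ports, assuming shell access to the node (e.g. minikube ssh -p running-upgrade-20220127031538-6703); ss and lsof are standard tools, not commands taken from this run:

	# list listeners on the conflicting ports together with the owning processes
	sudo ss -ltnp | grep -E ':(8443|10257|10259|2379|2380) '
	# or inspect a single port
	sudo lsof -i :8443

The cleanup step above (out/minikube-linux-amd64 delete -p running-upgrade-20220127031538-6703) removes the container and, with it, the stale listeners.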

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (11.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220127031714-6703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20220127031714-6703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (10.835789911s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220127031714-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.23.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.2
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node old-k8s-version-20220127031714-6703 in cluster old-k8s-version-20220127031714-6703
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220127031714-6703" ...
	* Restarting existing docker container for "old-k8s-version-20220127031714-6703" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 03:23:22.928129  234652 out.go:297] Setting OutFile to fd 1 ...
	I0127 03:23:22.928205  234652 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:23:22.928211  234652 out.go:310] Setting ErrFile to fd 2...
	I0127 03:23:22.928216  234652 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:23:22.928324  234652 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0127 03:23:22.928574  234652 out.go:304] Setting JSON to false
	I0127 03:23:22.930435  234652 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3957,"bootTime":1643249846,"procs":1051,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:23:22.930510  234652 start.go:122] virtualization: kvm guest
	I0127 03:23:22.932965  234652 out.go:176] * [old-k8s-version-20220127031714-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:23:22.934623  234652 out.go:176]   - MINIKUBE_LOCATION=13251
	I0127 03:23:22.933122  234652 notify.go:174] Checking for updates...
	I0127 03:23:22.936017  234652 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:23:22.937573  234652 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0127 03:23:22.939058  234652 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0127 03:23:22.940680  234652 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:23:22.941117  234652 config.go:176] Loaded profile config "old-k8s-version-20220127031714-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0127 03:23:22.943022  234652 out.go:176] * Kubernetes 1.23.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.2
	I0127 03:23:22.943056  234652 driver.go:344] Setting default libvirt URI to qemu:///system
	I0127 03:23:22.985166  234652 docker.go:132] docker version: linux-20.10.12
	I0127 03:23:22.985263  234652 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:23:23.080096  234652 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-01-27 03:23:23.016313233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:23:23.080250  234652 docker.go:237] overlay module found
	I0127 03:23:23.082592  234652 out.go:176] * Using the docker driver based on existing profile
	I0127 03:23:23.082615  234652 start.go:281] selected driver: docker
	I0127 03:23:23.082620  234652 start.go:798] validating driver "docker" against &{Name:old-k8s-version-20220127031714-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220127031714-6703 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true ku
belet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0127 03:23:23.082716  234652 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0127 03:23:23.082744  234652 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0127 03:23:23.082765  234652 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0127 03:23:23.084967  234652 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0127 03:23:23.085544  234652 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:23:23.179040  234652 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2022-01-27 03:23:23.114524125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W0127 03:23:23.179205  234652 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0127 03:23:23.179231  234652 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0127 03:23:23.181377  234652 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0127 03:23:23.181474  234652 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 03:23:23.181498  234652 cni.go:93] Creating CNI manager for ""
	I0127 03:23:23.181505  234652 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0127 03:23:23.181517  234652 start_flags.go:302] config:
	{Name:old-k8s-version-20220127031714-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220127031714-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:conta
inerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop
:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0127 03:23:23.183375  234652 out.go:176] * Starting control plane node old-k8s-version-20220127031714-6703 in cluster old-k8s-version-20220127031714-6703
	I0127 03:23:23.183414  234652 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0127 03:23:23.184930  234652 out.go:176] * Pulling base image ...
	I0127 03:23:23.184953  234652 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0127 03:23:23.184978  234652 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0127 03:23:23.184994  234652 cache.go:57] Caching tarball of preloaded images
	I0127 03:23:23.184996  234652 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0127 03:23:23.185196  234652 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 03:23:23.185231  234652 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0127 03:23:23.185350  234652 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220127031714-6703/config.json ...
	I0127 03:23:23.223019  234652 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0127 03:23:23.223055  234652 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0127 03:23:23.223074  234652 cache.go:208] Successfully downloaded all kic artifacts
	I0127 03:23:23.223134  234652 start.go:313] acquiring machines lock for old-k8s-version-20220127031714-6703: {Name:mk9f3d8148e082b2bd0283e0aa320c97d8b55cef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:23:23.223234  234652 start.go:317] acquired machines lock for "old-k8s-version-20220127031714-6703" in 73.124µs
	I0127 03:23:23.223269  234652 start.go:93] Skipping create...Using existing machine configuration
	I0127 03:23:23.223279  234652 fix.go:55] fixHost starting: 
	I0127 03:23:23.223511  234652 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220127031714-6703 --format={{.State.Status}}
	I0127 03:23:23.258857  234652 fix.go:108] recreateIfNeeded on old-k8s-version-20220127031714-6703: state=Stopped err=<nil>
	W0127 03:23:23.258891  234652 fix.go:134] unexpected machine state, will restart: <nil>
	I0127 03:23:23.261626  234652 out.go:176] * Restarting existing docker container for "old-k8s-version-20220127031714-6703" ...
	I0127 03:23:23.261691  234652 cli_runner.go:133] Run: docker start old-k8s-version-20220127031714-6703
	W0127 03:23:23.315446  234652 cli_runner.go:180] docker start old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:23.315550  234652 cli_runner.go:133] Run: docker inspect old-k8s-version-20220127031714-6703
	I0127 03:23:23.353357  234652 errors.go:84] Postmortem inspect ("docker inspect old-k8s-version-20220127031714-6703"): -- stdout --
	[
	    {
	        "Id": "91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73",
	        "Created": "2022-01-27T03:20:46.660393198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "network 75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87 not found",
	            "StartedAt": "2022-01-27T03:20:47.166723014Z",
	            "FinishedAt": "2022-01-27T03:23:21.951827406Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hostname",
	        "HostsPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hosts",
	        "LogPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73-json.log",
	        "Name": "/old-k8s-version-20220127031714-6703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220127031714-6703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220127031714-6703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b-init/diff:/var/lib/docker/overlay2/16963715367ada0e39ac0ff75961ba8f4191279cc4ff8b9fd9fd28a2afd586a9/diff:/var/lib/docker/overlay2/cf3bc92115c85438ae77ffc60f5cb639c3968a2c050d4a65b31cd6fccfa41416/diff:/var/lib/docker/overlay2/218e751ff7f51ec2e7fadb35a20c5a8ea67c3b1fd9c070ad8816e2ab30fdd9e4/diff:/var/lib/docker/overlay2/bf9ba576be8949fd04e2902858d2da23ce5cd4ba75e8e532dcf79ee7b42b1908/diff:/var/lib/docker/overlay2/8a96f75d3963c09f4f38ba2dc19523e12f658d2ed81533b22cf9cd9057083153/diff:/var/lib/docker/overlay2/5ee8bc1f5021db1aa75bbf3fb3060fab633d7e9f5a618431b3dc5d8a40fbdd2f/diff:/var/lib/docker/overlay2/c2d261af5836940de0b3aff73ea5e1328576bb1a856847be0e7afced9977e7d6/diff:/var/lib/docker/overlay2/14d653eab7bf5140d2e59be7764608906b78b45fc00c1618ff27987917119e60/diff:/var/lib/docker/overlay2/98223f44c4eceaa2e2d3f5d58e0fedda224657dcca26fb238aa0adec9753395d/diff:/var/lib/docker/overlay2/281afd
7c879ee9ec2ed091125655b6d2471ebfba9d70f828f16d582ccd389d51/diff:/var/lib/docker/overlay2/051e1076d1ab90fd612cf6a1357f63367f2661aaf90bc242f4e0bc93ce78a37b/diff:/var/lib/docker/overlay2/f6a47c5c1b4fde4d27c0d8f0a804be3d88995b705c5683976ac96ab692e5d79c/diff:/var/lib/docker/overlay2/ea4bae1904f2e590771d839dff6c6213846ccc1cc1fbaadf7444396e4dcc2a17/diff:/var/lib/docker/overlay2/789935353e9a43b8382bc8319c2f7f598462bd83a26a3c822d3481f62ff3afe3/diff:/var/lib/docker/overlay2/8ba6338f7d22201db727697f81bb55a260d179c2853681b855cfe641eaaf5f44/diff:/var/lib/docker/overlay2/3afbc0354287f271a5fa34e1ce8c58bac282ecb529fae2e3fc083a2e11b9a108/diff:/var/lib/docker/overlay2/6d064d2ddd73c69a0ca655bd5c632e1fddeddb1ad03016ca650f2b2e18a7f5dd/diff:/var/lib/docker/overlay2/8caadb592a9d04201e0a2fbb7a839bee7e5fea75f250ae661ce62dccc9a6439a/diff:/var/lib/docker/overlay2/7f97926f14f19d98861be9f0b5ea4321e0073fda2b4246addff2babb6782e2cd/diff:/var/lib/docker/overlay2/f97aaf1f58df74aae41d4989c1c2bcbe345d3d495d617cf92c16b7b082951549/diff:/var/lib/d
ocker/overlay2/c94c523d52a95a2bb2b25c99d07f159fd33fc79a55a85dc98e3fac825ce6ef30/diff:/var/lib/docker/overlay2/ac1d2ace06b5b555e3c895bd7786df4f8cee55531f1ea226fbf3c3d5582d3f29/diff:/var/lib/docker/overlay2/309519db7300c42aa8c54419ffa576f100db793a69a4e5fe3109ca02e14b8e50/diff:/var/lib/docker/overlay2/6f64c8673ef7362fcd7d55d0c19c0830a5c31c3841ae4ee69e75cd25a74a5048/diff:/var/lib/docker/overlay2/c08216237e476a12f55f3c1bafafd366c6a53eaf799965fd0ad88d097eedbcbf/diff:/var/lib/docker/overlay2/a870da1a92b7f0cb45baa7d95ac8cbd143a39082f8958524958f0d802a216e62/diff:/var/lib/docker/overlay2/88e7ec21e8af05b1015ba035f3860188f9bffa0e509a67ffa4267d7c7a76e972/diff:/var/lib/docker/overlay2/531e06b694b717bab79b354999b4f4912499b6421997a49bce26d9e82f9a3754/diff:/var/lib/docker/overlay2/3edc6927179b5caaefdff0badec36d6cbe41a8420a77adf24018241e26e6b603/diff:/var/lib/docker/overlay2/4b8a0abd7fe49430e3a1a29c98a3b3e3323c0fadc5b1311e26b612a06f8f76ae/diff:/var/lib/docker/overlay2/d3f5099daf876cc2cdbd27f60cd147c3c916036d04752ada6a17bd13210
f19e2/diff:/var/lib/docker/overlay2/2679ad4fb25b15275ad72b787f695d7e12948cab8b6f4ec2d6a61df2e0fcff7f/diff:/var/lib/docker/overlay2/e628f10038c6dee7c1f2a72d6abe7d1e8af2d38114365290918485e6ac95b313/diff:/var/lib/docker/overlay2/b07fb4ed2e44e92c06fd6500a3a666d60418960b7a1bcb8ebc7a6bb8d06dee11/diff:/var/lib/docker/overlay2/cfa3d2dac7804a585841fcf779597c2da336e152637f929ce76bed23499c23cc/diff:/var/lib/docker/overlay2/dced6c5820d9485d9b1f29a8f74920f7d509f513758a2816be4cb2c4f63bb242/diff:/var/lib/docker/overlay2/d8ecc913e96f9de3f718a3a063e0e5fa9e116c77aff94a32616bdb00bd7aac7f/diff:/var/lib/docker/overlay2/b6ac33633267400f1aa77b5b17c69fc7527f3d0ed4dfbd41fb003c94505ea310/diff:/var/lib/docker/overlay2/bbc9c4b2e5f00714c3142098bcdb97dedd2ca84f7a19455dfeeff55a95beffd4/diff:/var/lib/docker/overlay2/84944472d3651e44a20fd3fa72bed747da3567af8ebf5db89f6200326fa8ac7c/diff:/var/lib/docker/overlay2/a3fbfb702a1204c83d8fcd07aeba21a7139c0f98a642a9049754275c50b0d89b/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220127031714-6703",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220127031714-6703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220127031714-6703",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1f42216df58be0f0ab6a0ffcecf614684c3782f227f54c0afcbc1fb8c8902e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/a1f42216df58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220127031714-6703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91f0a9797496",
	                        "old-k8s-version-20220127031714-6703"
	                    ],
	                    "NetworkID": "75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0127 03:23:23.353426  234652 cli_runner.go:133] Run: docker logs --timestamps --details old-k8s-version-20220127031714-6703
	I0127 03:23:23.396984  234652 errors.go:91] Postmortem logs ("docker logs --timestamps --details old-k8s-version-20220127031714-6703"): -- stdout --
	2022-01-27T03:20:47.166845292Z  + userns=
	2022-01-27T03:20:47.166883146Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2022-01-27T03:20:47.169409854Z  + validate_userns
	2022-01-27T03:20:47.170823893Z  + [[ -z '' ]]
	2022-01-27T03:20:47.170835684Z  + return
	2022-01-27T03:20:47.170839958Z  + configure_containerd
	2022-01-27T03:20:47.170850373Z  ++ stat -f -c %T /kind
	2022-01-27T03:20:47.170866395Z  + [[ overlayfs == \z\f\s ]]
	2022-01-27T03:20:47.170871613Z  + configure_proxy
	2022-01-27T03:20:47.170875658Z  + mkdir -p /etc/systemd/system.conf.d/
	2022-01-27T03:20:47.172134379Z  + [[ ! -z '' ]]
	2022-01-27T03:20:47.172163426Z  + cat
	2022-01-27T03:20:47.173224503Z  + fix_kmsg
	2022-01-27T03:20:47.173236899Z  + [[ ! -e /dev/kmsg ]]
	2022-01-27T03:20:47.173239971Z  + fix_mount
	2022-01-27T03:20:47.173242582Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2022-01-27T03:20:47.173245651Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2022-01-27T03:20:47.173588664Z  ++ which mount
	2022-01-27T03:20:47.174746219Z  ++ which umount
	2022-01-27T03:20:47.175526186Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2022-01-27T03:20:47.181775358Z  ++ which mount
	2022-01-27T03:20:47.183136898Z  ++ which umount
	2022-01-27T03:20:47.183661909Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2022-01-27T03:20:47.185066971Z  +++ which mount
	2022-01-27T03:20:47.185807748Z  ++ stat -f -c %T /usr/bin/mount
	2022-01-27T03:20:47.186686911Z  + [[ overlayfs == \a\u\f\s ]]
	2022-01-27T03:20:47.186702171Z  + [[ -z '' ]]
	2022-01-27T03:20:47.186706541Z  + echo 'INFO: remounting /sys read-only'
	2022-01-27T03:20:47.186709932Z  INFO: remounting /sys read-only
	2022-01-27T03:20:47.186713110Z  + mount -o remount,ro /sys
	2022-01-27T03:20:47.188442209Z  + echo 'INFO: making mounts shared'
	2022-01-27T03:20:47.188457315Z  INFO: making mounts shared
	2022-01-27T03:20:47.188461681Z  + mount --make-rshared /
	2022-01-27T03:20:47.189625028Z  + retryable_fix_cgroup
	2022-01-27T03:20:47.189915785Z  ++ seq 0 10
	2022-01-27T03:20:47.190531866Z  + for i in $(seq 0 10)
	2022-01-27T03:20:47.190545483Z  + fix_cgroup
	2022-01-27T03:20:47.190549299Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2022-01-27T03:20:47.190553206Z  + echo 'INFO: detected cgroup v1'
	2022-01-27T03:20:47.190556960Z  INFO: detected cgroup v1
	2022-01-27T03:20:47.190560711Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2022-01-27T03:20:47.190564243Z  INFO: fix cgroup mounts for all subsystems
	2022-01-27T03:20:47.190579095Z  + local current_cgroup
	2022-01-27T03:20:47.191242527Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2022-01-27T03:20:47.191384525Z  ++ cut -d: -f3
	2022-01-27T03:20:47.192509710Z  + current_cgroup=/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.192533255Z  + local cgroup_subsystems
	2022-01-27T03:20:47.193083746Z  ++ findmnt -lun -o source,target -t cgroup
	2022-01-27T03:20:47.193286746Z  ++ grep /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.193297825Z  ++ awk '{print $2}'
	2022-01-27T03:20:47.195545193Z  + cgroup_subsystems='/sys/fs/cgroup/systemd
	2022-01-27T03:20:47.195558735Z  /sys/fs/cgroup/devices
	2022-01-27T03:20:47.195562580Z  /sys/fs/cgroup/cpuset
	2022-01-27T03:20:47.195566179Z  /sys/fs/cgroup/cpu,cpuacct
	2022-01-27T03:20:47.195569759Z  /sys/fs/cgroup/net_cls,net_prio
	2022-01-27T03:20:47.195572564Z  /sys/fs/cgroup/perf_event
	2022-01-27T03:20:47.195575904Z  /sys/fs/cgroup/memory
	2022-01-27T03:20:47.195579307Z  /sys/fs/cgroup/freezer
	2022-01-27T03:20:47.195582520Z  /sys/fs/cgroup/blkio
	2022-01-27T03:20:47.195586101Z  /sys/fs/cgroup/pids
	2022-01-27T03:20:47.195589285Z  /sys/fs/cgroup/hugetlb'
	2022-01-27T03:20:47.195592999Z  + local cgroup_mounts
	2022-01-27T03:20:47.195951096Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2022-01-27T03:20:47.197335264Z  + cgroup_mounts='/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:377 master:9 - cgroup cgroup
	2022-01-27T03:20:47.197348710Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:389 master:16 - cgroup cgroup
	2022-01-27T03:20:47.197353434Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:390 master:17 - cgroup cgroup
	2022-01-27T03:20:47.197357215Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:391 master:18 - cgroup cgroup
	2022-01-27T03:20:47.197361482Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:19 - cgroup cgroup
	2022-01-27T03:20:47.197365475Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:393 master:20 - cgroup cgroup
	2022-01-27T03:20:47.197369534Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:394 master:21 - cgroup cgroup
	2022-01-27T03:20:47.197373299Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:395 master:22 - cgroup cgroup
	2022-01-27T03:20:47.197387734Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:396 master:23 - cgroup cgroup
	2022-01-27T03:20:47.197391776Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:398 master:25 - cgroup cgroup
	2022-01-27T03:20:47.197395535Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:402 master:26 - cgroup cgroup'
	2022-01-27T03:20:47.197406287Z  + [[ -n /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:377 master:9 - cgroup cgroup
	2022-01-27T03:20:47.197422562Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:389 master:16 - cgroup cgroup
	2022-01-27T03:20:47.197427246Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:390 master:17 - cgroup cgroup
	2022-01-27T03:20:47.197436437Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:391 master:18 - cgroup cgroup
	2022-01-27T03:20:47.197440239Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:19 - cgroup cgroup
	2022-01-27T03:20:47.197443955Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:393 master:20 - cgroup cgroup
	2022-01-27T03:20:47.197447789Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:394 master:21 - cgroup cgroup
	2022-01-27T03:20:47.197451729Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:395 master:22 - cgroup cgroup
	2022-01-27T03:20:47.197455713Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:396 master:23 - cgroup cgroup
	2022-01-27T03:20:47.197459669Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:398 master:25 - cgroup cgroup
	2022-01-27T03:20:47.197463472Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:402 master:26 - cgroup cgroup ]]
	2022-01-27T03:20:47.197467361Z  + local mount_root
	2022-01-27T03:20:47.198054455Z  ++ head -n 1
	2022-01-27T03:20:47.198103330Z  ++ cut '-d ' -f1
	2022-01-27T03:20:47.199051645Z  + mount_root=/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.199662673Z  ++ echo '/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:377 master:9 - cgroup cgroup
	2022-01-27T03:20:47.199685710Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:389 master:16 - cgroup cgroup
	2022-01-27T03:20:47.199690432Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:390 master:17 - cgroup cgroup
	2022-01-27T03:20:47.199694524Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:391 master:18 - cgroup cgroup
	2022-01-27T03:20:47.199698333Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:19 - cgroup cgroup
	2022-01-27T03:20:47.199702114Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:393 master:20 - cgroup cgroup
	2022-01-27T03:20:47.199705822Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:394 master:21 - cgroup cgroup
	2022-01-27T03:20:47.199709341Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:395 master:22 - cgroup cgroup
	2022-01-27T03:20:47.199712928Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:396 master:23 - cgroup cgroup
	2022-01-27T03:20:47.199716629Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:398 master:25 - cgroup cgroup
	2022-01-27T03:20:47.199720928Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:402 master:26 - cgroup cgroup'
	2022-01-27T03:20:47.199730407Z  ++ cut '-d ' -f 2
	2022-01-27T03:20:47.200613271Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.200623940Z  + local target=/sys/fs/cgroup/systemd/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.200628293Z  + findmnt /sys/fs/cgroup/systemd/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.202331614Z  + mkdir -p /sys/fs/cgroup/systemd/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.203277191Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.204649010Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.204662934Z  + local target=/sys/fs/cgroup/devices/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.204666890Z  + findmnt /sys/fs/cgroup/devices/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.206655257Z  + mkdir -p /sys/fs/cgroup/devices/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.207563679Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.208759242Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.208777968Z  + local target=/sys/fs/cgroup/cpuset/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.208782352Z  + findmnt /sys/fs/cgroup/cpuset/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.210376065Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.259706772Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.261148341Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.261165739Z  + local target=/sys/fs/cgroup/cpu,cpuacct/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.261170292Z  + findmnt /sys/fs/cgroup/cpu,cpuacct/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.263012899Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.264161123Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.265384063Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.265394611Z  + local target=/sys/fs/cgroup/net_cls,net_prio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.265399019Z  + findmnt /sys/fs/cgroup/net_cls,net_prio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.267127447Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.268079348Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/net_cls,net_prio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.269387603Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.269401641Z  + local target=/sys/fs/cgroup/perf_event/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.269406079Z  + findmnt /sys/fs/cgroup/perf_event/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.270879668Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.271988753Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.273326357Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.273339615Z  + local target=/sys/fs/cgroup/memory/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.273354827Z  + findmnt /sys/fs/cgroup/memory/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.274858641Z  + mkdir -p /sys/fs/cgroup/memory/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.275906644Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.277070800Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.277079315Z  + local target=/sys/fs/cgroup/freezer/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.277081956Z  + findmnt /sys/fs/cgroup/freezer/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.278837139Z  + mkdir -p /sys/fs/cgroup/freezer/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.279861337Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.281007114Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.281018441Z  + local target=/sys/fs/cgroup/blkio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.281021221Z  + findmnt /sys/fs/cgroup/blkio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.282612267Z  + mkdir -p /sys/fs/cgroup/blkio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.283669944Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.284877479Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.284886623Z  + local target=/sys/fs/cgroup/pids/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.284951024Z  + findmnt /sys/fs/cgroup/pids/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.286583277Z  + mkdir -p /sys/fs/cgroup/pids/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.287884254Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.289105953Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.289129388Z  + local target=/sys/fs/cgroup/hugetlb/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.289133947Z  + findmnt /sys/fs/cgroup/hugetlb/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.290729543Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.291974656Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.293116875Z  + mount --make-rprivate /sys/fs/cgroup
	2022-01-27T03:20:47.295690418Z  + echo '/sys/fs/cgroup/systemd
	2022-01-27T03:20:47.295706180Z  /sys/fs/cgroup/devices
	2022-01-27T03:20:47.295710899Z  /sys/fs/cgroup/cpuset
	2022-01-27T03:20:47.295714509Z  /sys/fs/cgroup/cpu,cpuacct
	2022-01-27T03:20:47.295718168Z  /sys/fs/cgroup/net_cls,net_prio
	2022-01-27T03:20:47.295721688Z  /sys/fs/cgroup/perf_event
	2022-01-27T03:20:47.295725097Z  /sys/fs/cgroup/memory
	2022-01-27T03:20:47.295728372Z  /sys/fs/cgroup/freezer
	2022-01-27T03:20:47.295731403Z  /sys/fs/cgroup/blkio
	2022-01-27T03:20:47.295734418Z  /sys/fs/cgroup/pids
	2022-01-27T03:20:47.295737598Z  /sys/fs/cgroup/hugetlb'
	2022-01-27T03:20:47.295740548Z  + IFS=
	2022-01-27T03:20:47.295744030Z  + read -r subsystem
	2022-01-27T03:20:47.295747451Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2022-01-27T03:20:47.295751040Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.295754529Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-01-27T03:20:47.295757536Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.295760523Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2022-01-27T03:20:47.296892202Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.296906443Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2022-01-27T03:20:47.298369055Z  + IFS=
	2022-01-27T03:20:47.298382629Z  + read -r subsystem
	2022-01-27T03:20:47.298387441Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2022-01-27T03:20:47.298390968Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.298418429Z  + local subsystem=/sys/fs/cgroup/devices
	2022-01-27T03:20:47.298437195Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.298440968Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2022-01-27T03:20:47.299620974Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.299635580Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2022-01-27T03:20:47.300902854Z  + IFS=
	2022-01-27T03:20:47.300915676Z  + read -r subsystem
	2022-01-27T03:20:47.300918686Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2022-01-27T03:20:47.300922234Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.300953731Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-01-27T03:20:47.300962672Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.300966552Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2022-01-27T03:20:47.302086310Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.302097190Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-01-27T03:20:47.303871580Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-01-27T03:20:47.304694044Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2022-01-27T03:20:47.306158899Z  + IFS=
	2022-01-27T03:20:47.306170750Z  + read -r subsystem
	2022-01-27T03:20:47.306175584Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu,cpuacct
	2022-01-27T03:20:47.306179981Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.306183056Z  + local subsystem=/sys/fs/cgroup/cpu,cpuacct
	2022-01-27T03:20:47.306225556Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.306239251Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-01-27T03:20:47.307277423Z  + '[' /sys/fs/cgroup/cpu,cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.307289046Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct//kubelet /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-01-27T03:20:47.308611189Z  + IFS=
	2022-01-27T03:20:47.308633822Z  + read -r subsystem
	2022-01-27T03:20:47.308639496Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls,net_prio
	2022-01-27T03:20:47.308643661Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.308647306Z  + local subsystem=/sys/fs/cgroup/net_cls,net_prio
	2022-01-27T03:20:47.308651077Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.308655719Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-01-27T03:20:47.309653104Z  + '[' /sys/fs/cgroup/net_cls,net_prio == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.309665051Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio//kubelet /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-01-27T03:20:47.310897882Z  + IFS=
	2022-01-27T03:20:47.310908734Z  + read -r subsystem
	2022-01-27T03:20:47.310911709Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2022-01-27T03:20:47.310914493Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.310916724Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-01-27T03:20:47.310924121Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.310928085Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2022-01-27T03:20:47.312030634Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.312043985Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2022-01-27T03:20:47.315193208Z  + IFS=
	2022-01-27T03:20:47.315207758Z  + read -r subsystem
	2022-01-27T03:20:47.315271535Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2022-01-27T03:20:47.315277951Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.315281649Z  + local subsystem=/sys/fs/cgroup/memory
	2022-01-27T03:20:47.315285203Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.315339864Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2022-01-27T03:20:47.316742379Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.316753035Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2022-01-27T03:20:47.318072269Z  + IFS=
	2022-01-27T03:20:47.318083510Z  + read -r subsystem
	2022-01-27T03:20:47.318088055Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2022-01-27T03:20:47.318092038Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.318095762Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-01-27T03:20:47.318099735Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.318105182Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2022-01-27T03:20:47.319459076Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.319475907Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2022-01-27T03:20:47.320698374Z  + IFS=
	2022-01-27T03:20:47.320711820Z  + read -r subsystem
	2022-01-27T03:20:47.320716068Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2022-01-27T03:20:47.320767918Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.320782063Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-01-27T03:20:47.320786498Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.320790115Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2022-01-27T03:20:47.321774440Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.321787521Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2022-01-27T03:20:47.323129600Z  + IFS=
	2022-01-27T03:20:47.323143413Z  + read -r subsystem
	2022-01-27T03:20:47.323147550Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2022-01-27T03:20:47.323151225Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.323189316Z  + local subsystem=/sys/fs/cgroup/pids
	2022-01-27T03:20:47.323196491Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.323200195Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2022-01-27T03:20:47.324231902Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.324244329Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2022-01-27T03:20:47.325570365Z  + IFS=
	2022-01-27T03:20:47.325584501Z  + read -r subsystem
	2022-01-27T03:20:47.325588834Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2022-01-27T03:20:47.325592770Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.325598076Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-01-27T03:20:47.325602059Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.325605833Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2022-01-27T03:20:47.326909389Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.326935438Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2022-01-27T03:20:47.328414143Z  + IFS=
	2022-01-27T03:20:47.328428200Z  + read -r subsystem
	2022-01-27T03:20:47.328707982Z  + return
	2022-01-27T03:20:47.328721092Z  + fix_machine_id
	2022-01-27T03:20:47.328756060Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2022-01-27T03:20:47.328761482Z  INFO: clearing and regenerating /etc/machine-id
	2022-01-27T03:20:47.329042679Z  + rm -f /etc/machine-id
	2022-01-27T03:20:47.329789700Z  + systemd-machine-id-setup
	2022-01-27T03:20:47.333141762Z  Initializing machine ID from D-Bus machine ID.
	2022-01-27T03:20:47.341056744Z  + fix_product_name
	2022-01-27T03:20:47.341070468Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2022-01-27T03:20:47.341073956Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2022-01-27T03:20:47.341076750Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2022-01-27T03:20:47.341079373Z  + echo kind
	2022-01-27T03:20:47.341247978Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2022-01-27T03:20:47.342604838Z  + fix_product_uuid
	2022-01-27T03:20:47.342617737Z  + [[ ! -f /kind/product_uuid ]]
	2022-01-27T03:20:47.342621833Z  + cat /proc/sys/kernel/random/uuid
	2022-01-27T03:20:47.343665305Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2022-01-27T03:20:47.343676408Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2022-01-27T03:20:47.343680846Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2022-01-27T03:20:47.343732413Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2022-01-27T03:20:47.345008110Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2022-01-27T03:20:47.345020986Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2022-01-27T03:20:47.345025402Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2022-01-27T03:20:47.345029533Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2022-01-27T03:20:47.346325808Z  + select_iptables
	2022-01-27T03:20:47.346338846Z  + local mode=nft
	2022-01-27T03:20:47.347224711Z  ++ grep '^-'
	2022-01-27T03:20:47.347368118Z  ++ wc -l
	2022-01-27T03:20:47.350753972Z  + num_legacy_lines=6
	2022-01-27T03:20:47.350772021Z  + '[' 6 -ge 10 ']'
	2022-01-27T03:20:47.351613288Z  ++ grep '^-'
	2022-01-27T03:20:47.351757538Z  ++ wc -l
	2022-01-27T03:20:47.355375439Z  ++ true
	2022-01-27T03:20:47.355675776Z  + num_nft_lines=0
	2022-01-27T03:20:47.355690668Z  + '[' 6 -ge 0 ']'
	2022-01-27T03:20:47.355705077Z  + mode=legacy
	2022-01-27T03:20:47.355709371Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2022-01-27T03:20:47.355713084Z  INFO: setting iptables to detected mode: legacy
	2022-01-27T03:20:47.355716712Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-01-27T03:20:47.355788541Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2022-01-27T03:20:47.355800364Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2022-01-27T03:20:47.356173628Z  ++ seq 0 15
	2022-01-27T03:20:47.356759908Z  + for i in $(seq 0 15)
	2022-01-27T03:20:47.356774650Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-01-27T03:20:47.360598194Z  + return
	2022-01-27T03:20:47.360612437Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-01-27T03:20:47.360674051Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2022-01-27T03:20:47.360687182Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2022-01-27T03:20:47.361051614Z  ++ seq 0 15
	2022-01-27T03:20:47.361663451Z  + for i in $(seq 0 15)
	2022-01-27T03:20:47.361670272Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-01-27T03:20:47.364034162Z  + return
	2022-01-27T03:20:47.364048370Z  + enable_network_magic
	2022-01-27T03:20:47.364101604Z  + local docker_embedded_dns_ip=127.0.0.11
	2022-01-27T03:20:47.364109069Z  + local docker_host_ip
	2022-01-27T03:20:47.365137633Z  ++ cut '-d ' -f1
	2022-01-27T03:20:47.365227796Z  ++ head -n1 /dev/fd/63
	2022-01-27T03:20:47.365322946Z  +++ getent ahostsv4 host.docker.internal
	2022-01-27T03:20:47.378644084Z  + docker_host_ip=
	2022-01-27T03:20:47.378659802Z  + [[ -z '' ]]
	2022-01-27T03:20:47.379262405Z  ++ ip -4 route show default
	2022-01-27T03:20:47.379413964Z  ++ cut '-d ' -f3
	2022-01-27T03:20:47.381168121Z  + docker_host_ip=192.168.76.1
	2022-01-27T03:20:47.381180176Z  + iptables-save
	2022-01-27T03:20:47.381374449Z  + iptables-restore
	2022-01-27T03:20:47.382626305Z  + sed -e 's/-d 127.0.0.11/-d 192.168.76.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.76.1:53/g'
	2022-01-27T03:20:47.385354998Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2022-01-27T03:20:47.386587984Z  + sed -e s/127.0.0.11/192.168.76.1/g /etc/resolv.conf.original
	2022-01-27T03:20:47.388760866Z  ++ cut '-d ' -f1
	2022-01-27T03:20:47.388826197Z  ++ head -n1 /dev/fd/63
	2022-01-27T03:20:47.389314600Z  ++++ hostname
	2022-01-27T03:20:47.390061987Z  +++ getent ahostsv4 old-k8s-version-20220127031714-6703
	2022-01-27T03:20:47.391811311Z  + curr_ipv4=192.168.76.2
	2022-01-27T03:20:47.391823932Z  + echo 'INFO: Detected IPv4 address: 192.168.76.2'
	2022-01-27T03:20:47.391827881Z  INFO: Detected IPv4 address: 192.168.76.2
	2022-01-27T03:20:47.391831551Z  + '[' -f /kind/old-ipv4 ']'
	2022-01-27T03:20:47.391871387Z  + [[ -n 192.168.76.2 ]]
	2022-01-27T03:20:47.391896298Z  + echo -n 192.168.76.2
	2022-01-27T03:20:47.392941547Z  ++ cut '-d ' -f1
	2022-01-27T03:20:47.392993273Z  ++ head -n1 /dev/fd/63
	2022-01-27T03:20:47.393459982Z  ++++ hostname
	2022-01-27T03:20:47.394146601Z  +++ getent ahostsv6 old-k8s-version-20220127031714-6703
	2022-01-27T03:20:47.395592698Z  + curr_ipv6=
	2022-01-27T03:20:47.395606879Z  + echo 'INFO: Detected IPv6 address: '
	2022-01-27T03:20:47.395611261Z  INFO: Detected IPv6 address: 
	2022-01-27T03:20:47.395615529Z  + '[' -f /kind/old-ipv6 ']'
	2022-01-27T03:20:47.395619051Z  + [[ -n '' ]]
	2022-01-27T03:20:47.396029844Z  ++ uname -a
	2022-01-27T03:20:47.396616441Z  + echo 'entrypoint completed: Linux old-k8s-version-20220127031714-6703 5.11.0-1028-gcp #32~20.04.1-Ubuntu SMP Wed Jan 12 20:08:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux'
	2022-01-27T03:20:47.396628884Z  entrypoint completed: Linux old-k8s-version-20220127031714-6703 5.11.0-1028-gcp #32~20.04.1-Ubuntu SMP Wed Jan 12 20:08:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	2022-01-27T03:20:47.396633258Z  + exec /sbin/init
	2022-01-27T03:20:47.402732713Z  systemd 245.4-4ubuntu3.13 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2022-01-27T03:20:47.402753531Z  Detected virtualization docker.
	2022-01-27T03:20:47.402756770Z  Detected architecture x86-64.
	2022-01-27T03:20:47.403069664Z  
	2022-01-27T03:20:47.403083447Z  Welcome to Ubuntu 20.04.2 LTS!
	2022-01-27T03:20:47.403088210Z  
	2022-01-27T03:20:47.403099715Z  Set hostname to <old-k8s-version-20220127031714-6703>.
	2022-01-27T03:20:47.443047709Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2022-01-27T03:20:47.443200554Z  [  OK  ] Set up automount Arbitrary…s File System Automount Point.
	2022-01-27T03:20:47.443211734Z  [  OK  ] Reached target Local Encrypted Volumes.
	2022-01-27T03:20:47.443331683Z  [  OK  ] Reached target Network is Online.
	2022-01-27T03:20:47.443359517Z  [  OK  ] Reached target Paths.
	2022-01-27T03:20:47.443365370Z  [  OK  ] Reached target Slices.
	2022-01-27T03:20:47.443384888Z  [  OK  ] Reached target Swap.
	2022-01-27T03:20:47.443589802Z  [  OK  ] Listening on Journal Audit Socket.
	2022-01-27T03:20:47.443687901Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2022-01-27T03:20:47.443885779Z  [  OK  ] Listening on Journal Socket.
	2022-01-27T03:20:47.445256615Z           Mounting Huge Pages File System...
	2022-01-27T03:20:47.446595182Z           Mounting Kernel Debug File System...
	2022-01-27T03:20:47.447959607Z           Mounting Kernel Trace File System...
	2022-01-27T03:20:47.449593893Z           Starting Journal Service...
	2022-01-27T03:20:47.450892583Z           Starting Create list of st…odes for the current kernel...
	2022-01-27T03:20:47.453658170Z           Mounting FUSE Control File System...
	2022-01-27T03:20:47.454086040Z           Starting Remount Root and Kernel File Systems...
	2022-01-27T03:20:47.455615606Z           Starting Apply Kernel Variables...
	2022-01-27T03:20:47.457385657Z  [  OK  ] Mounted Huge Pages File System.
	2022-01-27T03:20:47.457526517Z  [  OK  ] Mounted Kernel Debug File System.
	2022-01-27T03:20:47.457659596Z  [  OK  ] Mounted Kernel Trace File System.
	2022-01-27T03:20:47.458363685Z  [  OK  ] Finished Create list of st… nodes for the current kernel.
	2022-01-27T03:20:47.458615255Z  [  OK  ] Mounted FUSE Control File System.
	2022-01-27T03:20:47.459296153Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2022-01-27T03:20:47.460913233Z           Starting Create System Users...
	2022-01-27T03:20:47.462326356Z           Starting Update UTMP about System Boot/Shutdown...
	2022-01-27T03:20:47.463238748Z  [  OK  ] Finished Apply Kernel Variables.
	2022-01-27T03:20:47.468868438Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2022-01-27T03:20:47.484458728Z  [  OK  ] Finished Create System Users.
	2022-01-27T03:20:47.484484145Z           Starting Create Static Device Nodes in /dev...
	2022-01-27T03:20:47.484554140Z  [  OK  ] Started Journal Service.
	2022-01-27T03:20:47.486957644Z           Starting Flush Journal to Persistent Storage...
	2022-01-27T03:20:47.492010442Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2022-01-27T03:20:47.492764333Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2022-01-27T03:20:47.492920578Z  [  OK  ] Reached target Local File Systems (Pre).
	2022-01-27T03:20:47.492939985Z  [  OK  ] Reached target Local File Systems.
	2022-01-27T03:20:47.493175471Z  [  OK  ] Reached target System Initialization.
	2022-01-27T03:20:47.493190618Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2022-01-27T03:20:47.493219232Z  [  OK  ] Reached target Timers.
	2022-01-27T03:20:47.493368945Z  [  OK  ] Listening on BuildKit.
	2022-01-27T03:20:47.493479589Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2022-01-27T03:20:47.494635433Z           Starting Docker Socket for the API.
	2022-01-27T03:20:47.504876585Z           Starting Podman API Socket.
	2022-01-27T03:20:47.506370893Z  [  OK  ] Listening on Docker Socket for the API.
	2022-01-27T03:20:47.506437039Z  [  OK  ] Listening on Podman API Socket.
	2022-01-27T03:20:47.506488816Z  [  OK  ] Reached target Sockets.
	2022-01-27T03:20:47.506572778Z  [  OK  ] Reached target Basic System.
	2022-01-27T03:20:47.507768967Z           Starting containerd container runtime...
	2022-01-27T03:20:47.509330271Z  [  OK  ] Started D-Bus System Message Bus.
	2022-01-27T03:20:47.512665640Z           Starting minikube automount...
	2022-01-27T03:20:47.514091163Z           Starting OpenBSD Secure Shell server...
	2022-01-27T03:20:47.528967592Z  [  OK  ] Finished minikube automount.
	2022-01-27T03:20:47.559382637Z  [  OK  ] Started OpenBSD Secure Shell server.
	2022-01-27T03:20:47.570200838Z  [  OK  ] Started containerd container runtime.
	2022-01-27T03:20:47.571364894Z           Starting Docker Application Container Engine...
	2022-01-27T03:20:47.818074788Z  [  OK  ] Started Docker Application Container Engine.
	2022-01-27T03:20:47.818224760Z  [  OK  ] Reached target Multi-User System.
	2022-01-27T03:20:47.818252499Z  [  OK  ] Reached target Graphical Interface.
	2022-01-27T03:20:47.820245627Z           Starting Update UTMP about System Runlevel Changes...
	2022-01-27T03:20:47.827923748Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2022-01-27T03:23:02.194811082Z  [  OK  ] Stopped target Graphical Interface.
	2022-01-27T03:23:02.194842988Z  [  OK  ] Stopped target Multi-User System.
	2022-01-27T03:23:02.194847838Z  [  OK  ] Stopped target Timers.
	2022-01-27T03:23:02.194853595Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2022-01-27T03:23:02.194857349Z           Stopping D-Bus System Message Bus...
	2022-01-27T03:23:02.194861148Z           Stopping Docker Application Container Engine...
	2022-01-27T03:23:02.194865006Z           Stopping kubelet: The Kubernetes Node Agent...
	2022-01-27T03:23:02.195355260Z           Stopping OpenBSD Secure Shell server...
	2022-01-27T03:23:02.198576065Z  [  OK  ] Stopped D-Bus System Message Bus.
	2022-01-27T03:23:02.198595425Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2022-01-27T03:23:02.204697231Z  [  OK  ] Stopped Docker Application Container Engine.
	2022-01-27T03:23:02.204986944Z  [  OK  ] Stopped target Network is Online.
	2022-01-27T03:23:02.206431061Z           Stopping containerd container runtime...
	2022-01-27T03:23:02.206448978Z  [  OK  ] Stopped minikube automount.
	2022-01-27T03:23:02.216784115Z  [  OK  ] Stopped containerd container runtime.
	2022-01-27T03:23:02.264325242Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2022-01-27T03:23:02.264704680Z  [  OK  ] Stopped target Basic System.
	2022-01-27T03:23:02.264723885Z  [  OK  ] Stopped target Paths.
	2022-01-27T03:23:02.264729048Z  [  OK  ] Stopped target Slices.
	2022-01-27T03:23:02.264733528Z  [  OK  ] Stopped target Sockets.
	2022-01-27T03:23:02.288345511Z  [  OK  ] Closed BuildKit.
	2022-01-27T03:23:02.288956433Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2022-01-27T03:23:02.289708240Z  [  OK  ] Closed Docker Socket for the API.
	2022-01-27T03:23:02.290457967Z  [  OK  ] Closed Podman API Socket.
	2022-01-27T03:23:02.290476761Z  [  OK  ] Stopped target System Initialization.
	2022-01-27T03:23:02.290493320Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2022-01-27T03:23:02.307436732Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2022-01-27T03:23:02.307457754Z  [  OK  ] Stopped target Local File Systems.
	2022-01-27T03:23:02.309014431Z           Unmounting /data...
	2022-01-27T03:23:02.309829255Z           Unmounting /etc/hostname...
	2022-01-27T03:23:02.310035643Z           Unmounting /etc/hosts...
	2022-01-27T03:23:02.311234762Z           Unmounting /etc/resolv.conf...
	2022-01-27T03:23:02.312449535Z           Unmounting /kind/product_uuid...
	2022-01-27T03:23:02.313812744Z           Unmounting /run/containerd…333eb8f1abddaa8d37b4667/shm...
	2022-01-27T03:23:02.315416772Z           Unmounting /run/containerd…b382f20f17ed1e7d2f76d68/shm...
	2022-01-27T03:23:02.319840048Z           Unmounting /run/containerd…75b219f31bc9edee584b704/shm...
	2022-01-27T03:23:02.320264630Z           Unmounting /run/containerd…39840f454273bb78191fc32/shm...
	2022-01-27T03:23:02.321714746Z           Unmounting /run/containerd…dc402c6ed887247644feb27/shm...
	2022-01-27T03:23:02.323246109Z           Unmounting /run/containerd…3f5929d8fa8ca68e230481e/shm...
	2022-01-27T03:23:02.324919681Z           Unmounting /run/containerd…de332074adfb50337f4a368/shm...
	2022-01-27T03:23:02.326499973Z           Unmounting /run/containerd…8360c732a8ec4aa05e5e56b/shm...
	2022-01-27T03:23:02.327899954Z           Unmounting /run/containerd…bfc3835e27f59803443078d/shm...
	2022-01-27T03:23:02.329231764Z           Unmounting /run/containerd…87635b19be3d68c920eb4de/shm...
	2022-01-27T03:23:02.330707419Z           Unmounting /run/containerd…977691262457ab45bbbc/rootfs...
	2022-01-27T03:23:02.333533545Z           Unmounting /run/containerd…eb8f1abddaa8d37b4667/rootfs...
	2022-01-27T03:23:02.333930464Z           Unmounting /run/containerd…566b44267b54ef7194f4/rootfs...
	2022-01-27T03:23:02.335481104Z           Unmounting /run/containerd…2f20f17ed1e7d2f76d68/rootfs...
	2022-01-27T03:23:02.337047649Z           Unmounting /run/containerd…219f31bc9edee584b704/rootfs...
	2022-01-27T03:23:02.341823498Z           Unmounting /run/containerd…5da2d89660bd8e82913e/rootfs...
	2022-01-27T03:23:02.344577699Z           Unmounting /run/containerd…9f25fbbc2d6f17e2f240/rootfs...
	2022-01-27T03:23:02.345132248Z           Unmounting /run/containerd…474c6d3e87f6eec23012/rootfs...
	2022-01-27T03:23:02.346601697Z           Unmounting /run/containerd…40f454273bb78191fc32/rootfs...
	2022-01-27T03:23:02.349569546Z           Unmounting /run/containerd…02c6ed887247644feb27/rootfs...
	2022-01-27T03:23:02.350985168Z           Unmounting /run/containerd…929d8fa8ca68e230481e/rootfs...
	2022-01-27T03:23:02.352325791Z           Unmounting /run/containerd…d87c0b33934d4b4b123c/rootfs...
	2022-01-27T03:23:02.354293693Z           Unmounting /run/containerd…6ba4d3864c9d7ae64eef/rootfs...
	2022-01-27T03:23:02.357096190Z           Unmounting /run/containerd…32074adfb50337f4a368/rootfs...
	2022-01-27T03:23:02.358719716Z           Unmounting /run/containerd…f1be103a2e440cafee66/rootfs...
	2022-01-27T03:23:02.358994660Z           Unmounting /run/containerd…0c732a8ec4aa05e5e56b/rootfs...
	2022-01-27T03:23:02.361457541Z           Unmounting /run/containerd…3835e27f59803443078d/rootfs...
	2022-01-27T03:23:02.362077439Z           Unmounting /run/containerd…35b19be3d68c920eb4de/rootfs...
	2022-01-27T03:23:02.363657459Z           Unmounting /run/containerd…27dcd99bfc253fc52444/rootfs...
	2022-01-27T03:23:02.366007247Z           Unmounting /run/netns/cni-…4a10-0868-3cef-9b80a197117b...
	2022-01-27T03:23:02.367507239Z           Unmounting /run/netns/cni-…a093-5e1e-df89-09f5cdd14b38...
	2022-01-27T03:23:02.368529105Z           Unmounting /run/netns/cni-…f584-82a2-7936-e1086ddd4a4a...
	2022-01-27T03:23:02.369138738Z           Unmounting /tmp/hostpath-provisioner...
	2022-01-27T03:23:02.370387568Z           Unmounting /tmp/hostpath_pv...
	2022-01-27T03:23:02.371613699Z           Unmounting /usr/lib/modules...
	2022-01-27T03:23:02.405124247Z           Unmounting /var/lib/kubele…~secret/default-token-rgcg4...
	2022-01-27T03:23:02.406792162Z           Unmounting /var/lib/kubele…~secret/coredns-token-6zthz...
	2022-01-27T03:23:02.408559841Z           Unmounting /var/lib/kubele…cret/kube-proxy-token-p2864...
	2022-01-27T03:23:02.410039918Z           Unmounting /var/lib/kubele…~secret/kindnet-token-lxwm4...
	2022-01-27T03:23:02.411469128Z           Unmounting /var/lib/kubele…age-provisioner-token-2xkxn...
	2022-01-27T03:23:02.412974994Z           Unmounting /var/lib/kubele…/metrics-server-token-hrtz7...
	2022-01-27T03:23:02.413785907Z  [  OK  ] Stopped Apply Kernel Variables.
	2022-01-27T03:23:02.414537824Z           Stopping Update UTMP about System Boot/Shutdown...
	2022-01-27T03:23:02.420484303Z  [  OK  ] Unmounted /data.
	2022-01-27T03:23:02.421170819Z  [  OK  ] Unmounted /etc/hostname.
	2022-01-27T03:23:02.421923617Z  [  OK  ] Unmounted /etc/hosts.
	2022-01-27T03:23:02.422593655Z  [  OK  ] Unmounted /etc/resolv.conf.
	2022-01-27T03:23:02.423382972Z  [  OK  ] Unmounted /kind/product_uuid.
	2022-01-27T03:23:02.424057698Z  [  OK  ] Unmounted /run/containerd/…2d333eb8f1abddaa8d37b4667/shm.
	2022-01-27T03:23:02.424801286Z  [  OK  ] Unmounted /run/containerd/…19b382f20f17ed1e7d2f76d68/shm.
	2022-01-27T03:23:02.425479253Z  [  OK  ] Unmounted /run/containerd/…2275b219f31bc9edee584b704/shm.
	2022-01-27T03:23:02.426888904Z  [  OK  ] Unmounted /run/containerd/…8739840f454273bb78191fc32/shm.
	2022-01-27T03:23:02.426922023Z  [  OK  ] Unmounted /run/containerd/…7adc402c6ed887247644feb27/shm.
	2022-01-27T03:23:02.428027451Z  [  OK  ] Unmounted /run/containerd/…173f5929d8fa8ca68e230481e/shm.
	2022-01-27T03:23:02.429020159Z  [  OK  ] Unmounted /run/containerd/…d4de332074adfb50337f4a368/shm.
	2022-01-27T03:23:02.429921142Z  [  OK  ] Unmounted /run/containerd/…518360c732a8ec4aa05e5e56b/shm.
	2022-01-27T03:23:02.430634884Z  [  OK  ] Unmounted /run/containerd/…77bfc3835e27f59803443078d/shm.
	2022-01-27T03:23:02.431348264Z  [  OK  ] Unmounted /run/containerd/…9b87635b19be3d68c920eb4de/shm.
	2022-01-27T03:23:02.432001364Z  [  OK  ] Unmounted /run/containerd/…3f977691262457ab45bbbc/rootfs.
	2022-01-27T03:23:02.432692850Z  [  OK  ] Unmounted /run/containerd/…33eb8f1abddaa8d37b4667/rootfs.
	2022-01-27T03:23:02.433323868Z  [  OK  ] Unmounted /run/containerd/…e4566b44267b54ef7194f4/rootfs.
	2022-01-27T03:23:02.433950525Z  [  OK  ] Unmounted /run/containerd/…382f20f17ed1e7d2f76d68/rootfs.
	2022-01-27T03:23:02.434687946Z  [  OK  ] Unmounted /run/containerd/…5b219f31bc9edee584b704/rootfs.
	2022-01-27T03:23:02.435377009Z  [  OK  ] Unmounted /run/containerd/…e55da2d89660bd8e82913e/rootfs.
	2022-01-27T03:23:02.436104323Z  [  OK  ] Unmounted /run/containerd/…4d9f25fbbc2d6f17e2f240/rootfs.
	2022-01-27T03:23:02.437021940Z  [  OK  ] Unmounted /run/containerd/…7d474c6d3e87f6eec23012/rootfs.
	2022-01-27T03:23:02.437815741Z  [  OK  ] Unmounted /run/containerd/…9840f454273bb78191fc32/rootfs.
	2022-01-27T03:23:02.438643995Z  [  OK  ] Unmounted /run/containerd/…c402c6ed887247644feb27/rootfs.
	2022-01-27T03:23:02.439332380Z  [  OK  ] Unmounted /run/containerd/…f5929d8fa8ca68e230481e/rootfs.
	2022-01-27T03:23:02.440056252Z  [  OK  ] Unmounted /run/containerd/…6bd87c0b33934d4b4b123c/rootfs.
	2022-01-27T03:23:02.440749778Z  [  OK  ] Unmounted /run/containerd/…b36ba4d3864c9d7ae64eef/rootfs.
	2022-01-27T03:23:02.441495514Z  [  OK  ] Unmounted /run/containerd/…e332074adfb50337f4a368/rootfs.
	2022-01-27T03:23:02.442175134Z  [  OK  ] Unmounted /run/containerd/…1af1be103a2e440cafee66/rootfs.
	2022-01-27T03:23:02.442870762Z  [  OK  ] Unmounted /run/containerd/…360c732a8ec4aa05e5e56b/rootfs.
	2022-01-27T03:23:02.444332461Z  [  OK  ] Unmounted /run/containerd/…fc3835e27f59803443078d/rootfs.
	2022-01-27T03:23:02.445016691Z  [  OK  ] Unmounted /run/containerd/…7635b19be3d68c920eb4de/rootfs.
	2022-01-27T03:23:02.445803319Z  [  OK  ] Unmounted /run/containerd/…0727dcd99bfc253fc52444/rootfs.
	2022-01-27T03:23:02.446673394Z  [  OK  ] Unmounted /run/netns/cni-4…c-4a10-0868-3cef-9b80a197117b.
	2022-01-27T03:23:02.447445728Z  [  OK  ] Unmounted /run/netns/cni-8…9-a093-5e1e-df89-09f5cdd14b38.
	2022-01-27T03:23:02.448159741Z  [  OK  ] Unmounted /run/netns/cni-b…c-f584-82a2-7936-e1086ddd4a4a.
	2022-01-27T03:23:02.448771024Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2022-01-27T03:23:02.449386207Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2022-01-27T03:23:02.449946337Z  [  OK  ] Unmounted /usr/lib/modules.
	2022-01-27T03:23:02.450531280Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/default-token-rgcg4.
	2022-01-27T03:23:02.451200193Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/coredns-token-6zthz.
	2022-01-27T03:23:02.454852331Z  [  OK  ] Unmounted /var/lib/kubelet…secret/kube-proxy-token-p2864.
	2022-01-27T03:23:02.455560223Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/kindnet-token-lxwm4.
	2022-01-27T03:23:02.456196000Z  [  OK  ] Unmounted /var/lib/kubelet…orage-provisioner-token-2xkxn.
	2022-01-27T03:23:02.456822603Z  [  OK  ] Unmounted /var/lib/kubelet…et/metrics-server-token-hrtz7.
	2022-01-27T03:23:02.459763314Z           Unmounting /tmp...
	2022-01-27T03:23:02.460858579Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2022-01-27T03:23:02.462334666Z           Unmounting /var...
	2022-01-27T03:23:02.464297200Z  [  OK  ] Unmounted /tmp.
	2022-01-27T03:23:02.464398759Z  [  OK  ] Stopped target Swap.
	2022-01-27T03:23:02.467486577Z  [  OK  ] Unmounted /var.
	2022-01-27T03:23:02.467575713Z  [  OK  ] Stopped target Local File Systems (Pre).
	2022-01-27T03:23:02.467624841Z  [  OK  ] Reached target Unmount All Filesystems.
	2022-01-27T03:23:02.469178986Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2022-01-27T03:23:02.470361448Z  [  OK  ] Stopped Create System Users.
	2022-01-27T03:23:02.470890463Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2022-01-27T03:23:02.470904208Z  [  OK  ] Reached target Shutdown.
	2022-01-27T03:23:02.470908598Z  [  OK  ] Reached target Final Step.
	2022-01-27T03:23:02.471133196Z  [  OK  ] Finished Power-Off.
	2022-01-27T03:23:02.471143916Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
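The kicbase entrypoint trace above ends by pinning iptables to legacy mode (6 legacy rules vs. 0 nft rules) and handing off to /sbin/init. The branching is easier to see pulled out of the xtrace output; the sketch below is reconstructed from the trace, not copied from the entrypoint script, and the names of the rule dumps (iptables-legacy-save / iptables-nft-save) are an assumption, since the trace only echoes the grep/wc halves of those pipelines.

    # Sketch reconstructed from the trace above; not the verbatim entrypoint script.
    # Assumption: the un-echoed halves of the pipelines are iptables-legacy-save
    # and iptables-nft-save; the trace only shows the grep '^-' | wc -l side.
    mode=nft
    num_legacy_lines=$( (iptables-legacy-save || true) | grep '^-' | wc -l)   # 6 in this run
    if [ "$num_legacy_lines" -ge 10 ]; then
      mode=legacy
    else
      num_nft_lines=$( (iptables-nft-save || true) | grep '^-' | wc -l)       # 0 in this run
      if [ "$num_legacy_lines" -ge "$num_nft_lines" ]; then
        mode=legacy
      fi
    fi
    echo "INFO: setting iptables to detected mode: $mode"
    update-alternatives --set iptables  "/usr/sbin/iptables-$mode"    # the trace retries this up to 16 times
    update-alternatives --set ip6tables "/usr/sbin/ip6tables-$mode"

With 6 legacy rules and 0 nft rules, this run takes the second branch, which is why both alternatives end up pointing at the -legacy binaries before systemd starts.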
	I0127 03:23:23.397141  234652 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:23:23.490692  234652 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-01-27 03:23:23.428927315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:23:23.490767  234652 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-01-27 03:23:23.428927315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:23:23.490856  234652 network_create.go:254] running [docker network inspect old-k8s-version-20220127031714-6703] to gather additional debugging logs...
	I0127 03:23:23.490880  234652 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220127031714-6703
	W0127 03:23:23.522615  234652 cli_runner.go:180] docker network inspect old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:23.522653  234652 network_create.go:257] error running [docker network inspect old-k8s-version-20220127031714-6703]: docker network inspect old-k8s-version-20220127031714-6703: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220127031714-6703
	I0127 03:23:23.522670  234652 network_create.go:259] output of [docker network inspect old-k8s-version-20220127031714-6703]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220127031714-6703
	
	** /stderr **
	I0127 03:23:23.522806  234652 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:23:23.617112  234652 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2022-01-27 03:23:23.553939929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:23:23.617493  234652 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220127031714-6703
	I0127 03:23:23.650737  234652 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220127031714-6703/config.json ...
	I0127 03:23:23.650938  234652 machine.go:88] provisioning docker machine ...
	I0127 03:23:23.650961  234652 ubuntu.go:169] provisioning hostname "old-k8s-version-20220127031714-6703"
	I0127 03:23:23.650992  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:23.684788  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:23.684856  234652 machine.go:91] provisioned docker machine in 33.905079ms
	I0127 03:23:23.684900  234652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 03:23:23.684934  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:23.718981  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:23.719091  234652 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:23.995563  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:24.034757  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:24.034879  234652 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:24.575243  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:24.610885  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:24.610978  234652 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:25.266790  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:25.301409  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	W0127 03:23:25.301504  234652 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0127 03:23:25.301520  234652 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:25.301534  234652 fix.go:57] fixHost completed within 2.078256839s
	I0127 03:23:25.301543  234652 start.go:80] releasing machines lock for "old-k8s-version-20220127031714-6703", held for 2.078295299s
	W0127 03:23:25.301573  234652 start.go:570] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0127 03:23:25.301705  234652 out.go:241] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:25.301724  234652 start.go:585] Will try again in 5 seconds ...
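The retry.go entries above are minikube polling the container's published 22/tcp host port and backing off (roughly 276 ms, 540 ms, then 655 ms) before giving up, because a stopped container publishes no ports. A rough shell equivalent of that probe, using the same inspect template the cli_runner lines show, looks like this (the loop and delays are illustrative, not minikube's actual code):

    # Illustrative only: poll the published SSH port the way the cli_runner calls above do.
    name=old-k8s-version-20220127031714-6703
    for delay in 0.3 0.5 0.7; do
      port=$(docker container inspect -f \
        '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        "$name" 2>/dev/null) && [ -n "$port" ] && break
      sleep "$delay"
    done
    echo "${port:-no SSH port published: container is not running}"

Since the container has already exited, every attempt fails the same way, fixHost gives up after about 2 s, and minikube schedules the second StartHost attempt five seconds later.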
	I0127 03:23:30.303449  234652 start.go:313] acquiring machines lock for old-k8s-version-20220127031714-6703: {Name:mk9f3d8148e082b2bd0283e0aa320c97d8b55cef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 03:23:30.303566  234652 start.go:317] acquired machines lock for "old-k8s-version-20220127031714-6703" in 85.837µs
	I0127 03:23:30.303591  234652 start.go:93] Skipping create...Using existing machine configuration
	I0127 03:23:30.303598  234652 fix.go:55] fixHost starting: 
	I0127 03:23:30.303829  234652 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220127031714-6703 --format={{.State.Status}}
	I0127 03:23:30.339047  234652 fix.go:108] recreateIfNeeded on old-k8s-version-20220127031714-6703: state=Stopped err=<nil>
	W0127 03:23:30.339071  234652 fix.go:134] unexpected machine state, will restart: <nil>
	I0127 03:23:30.579576  234652 out.go:176] * Restarting existing docker container for "old-k8s-version-20220127031714-6703" ...
	I0127 03:23:30.579664  234652 cli_runner.go:133] Run: docker start old-k8s-version-20220127031714-6703
	W0127 03:23:31.416906  234652 cli_runner.go:180] docker start old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:31.416987  234652 cli_runner.go:133] Run: docker inspect old-k8s-version-20220127031714-6703
	I0127 03:23:31.461989  234652 errors.go:84] Postmortem inspect ("docker inspect old-k8s-version-20220127031714-6703"): -- stdout --
	[
	    {
	        "Id": "91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73",
	        "Created": "2022-01-27T03:20:46.660393198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "network 75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87 not found",
	            "StartedAt": "2022-01-27T03:20:47.166723014Z",
	            "FinishedAt": "2022-01-27T03:23:21.951827406Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hostname",
	        "HostsPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hosts",
	        "LogPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73-json.log",
	        "Name": "/old-k8s-version-20220127031714-6703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220127031714-6703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220127031714-6703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b-init/diff:/var/lib/docker/overlay2/16963715367ada0e39ac0ff75961ba8f4191279cc4ff8b9fd9fd28a2afd586a9/diff:/var/lib/docker/overlay2/cf3bc92115c85438ae77ffc60f5cb639c3968a2c050d4a65b31cd6fccfa41416/diff:/var/lib/docker/overlay2/218e751ff7f51ec2e7fadb35a20c5a8ea67c3b1fd9c070ad8816e2ab30fdd9e4/diff:/var/lib/docker/overlay2/bf9ba576be8949fd04e2902858d2da23ce5cd4ba75e8e532dcf79ee7b42b1908/diff:/var/lib/docker/overlay2/8a96f75d3963c09f4f38ba2dc19523e12f658d2ed81533b22cf9cd9057083153/diff:/var/lib/docker/overlay2/5ee8bc1f5021db1aa75bbf3fb3060fab633d7e9f5a618431b3dc5d8a40fbdd2f/diff:/var/lib/docker/overlay2/c2d261af5836940de0b3aff73ea5e1328576bb1a856847be0e7afced9977e7d6/diff:/var/lib/docker/overlay2/14d653eab7bf5140d2e59be7764608906b78b45fc00c1618ff27987917119e60/diff:/var/lib/docker/overlay2/98223f44c4eceaa2e2d3f5d58e0fedda224657dcca26fb238aa0adec9753395d/diff:/var/lib/docker/overlay2/281afd
7c879ee9ec2ed091125655b6d2471ebfba9d70f828f16d582ccd389d51/diff:/var/lib/docker/overlay2/051e1076d1ab90fd612cf6a1357f63367f2661aaf90bc242f4e0bc93ce78a37b/diff:/var/lib/docker/overlay2/f6a47c5c1b4fde4d27c0d8f0a804be3d88995b705c5683976ac96ab692e5d79c/diff:/var/lib/docker/overlay2/ea4bae1904f2e590771d839dff6c6213846ccc1cc1fbaadf7444396e4dcc2a17/diff:/var/lib/docker/overlay2/789935353e9a43b8382bc8319c2f7f598462bd83a26a3c822d3481f62ff3afe3/diff:/var/lib/docker/overlay2/8ba6338f7d22201db727697f81bb55a260d179c2853681b855cfe641eaaf5f44/diff:/var/lib/docker/overlay2/3afbc0354287f271a5fa34e1ce8c58bac282ecb529fae2e3fc083a2e11b9a108/diff:/var/lib/docker/overlay2/6d064d2ddd73c69a0ca655bd5c632e1fddeddb1ad03016ca650f2b2e18a7f5dd/diff:/var/lib/docker/overlay2/8caadb592a9d04201e0a2fbb7a839bee7e5fea75f250ae661ce62dccc9a6439a/diff:/var/lib/docker/overlay2/7f97926f14f19d98861be9f0b5ea4321e0073fda2b4246addff2babb6782e2cd/diff:/var/lib/docker/overlay2/f97aaf1f58df74aae41d4989c1c2bcbe345d3d495d617cf92c16b7b082951549/diff:/var/lib/d
ocker/overlay2/c94c523d52a95a2bb2b25c99d07f159fd33fc79a55a85dc98e3fac825ce6ef30/diff:/var/lib/docker/overlay2/ac1d2ace06b5b555e3c895bd7786df4f8cee55531f1ea226fbf3c3d5582d3f29/diff:/var/lib/docker/overlay2/309519db7300c42aa8c54419ffa576f100db793a69a4e5fe3109ca02e14b8e50/diff:/var/lib/docker/overlay2/6f64c8673ef7362fcd7d55d0c19c0830a5c31c3841ae4ee69e75cd25a74a5048/diff:/var/lib/docker/overlay2/c08216237e476a12f55f3c1bafafd366c6a53eaf799965fd0ad88d097eedbcbf/diff:/var/lib/docker/overlay2/a870da1a92b7f0cb45baa7d95ac8cbd143a39082f8958524958f0d802a216e62/diff:/var/lib/docker/overlay2/88e7ec21e8af05b1015ba035f3860188f9bffa0e509a67ffa4267d7c7a76e972/diff:/var/lib/docker/overlay2/531e06b694b717bab79b354999b4f4912499b6421997a49bce26d9e82f9a3754/diff:/var/lib/docker/overlay2/3edc6927179b5caaefdff0badec36d6cbe41a8420a77adf24018241e26e6b603/diff:/var/lib/docker/overlay2/4b8a0abd7fe49430e3a1a29c98a3b3e3323c0fadc5b1311e26b612a06f8f76ae/diff:/var/lib/docker/overlay2/d3f5099daf876cc2cdbd27f60cd147c3c916036d04752ada6a17bd13210
f19e2/diff:/var/lib/docker/overlay2/2679ad4fb25b15275ad72b787f695d7e12948cab8b6f4ec2d6a61df2e0fcff7f/diff:/var/lib/docker/overlay2/e628f10038c6dee7c1f2a72d6abe7d1e8af2d38114365290918485e6ac95b313/diff:/var/lib/docker/overlay2/b07fb4ed2e44e92c06fd6500a3a666d60418960b7a1bcb8ebc7a6bb8d06dee11/diff:/var/lib/docker/overlay2/cfa3d2dac7804a585841fcf779597c2da336e152637f929ce76bed23499c23cc/diff:/var/lib/docker/overlay2/dced6c5820d9485d9b1f29a8f74920f7d509f513758a2816be4cb2c4f63bb242/diff:/var/lib/docker/overlay2/d8ecc913e96f9de3f718a3a063e0e5fa9e116c77aff94a32616bdb00bd7aac7f/diff:/var/lib/docker/overlay2/b6ac33633267400f1aa77b5b17c69fc7527f3d0ed4dfbd41fb003c94505ea310/diff:/var/lib/docker/overlay2/bbc9c4b2e5f00714c3142098bcdb97dedd2ca84f7a19455dfeeff55a95beffd4/diff:/var/lib/docker/overlay2/84944472d3651e44a20fd3fa72bed747da3567af8ebf5db89f6200326fa8ac7c/diff:/var/lib/docker/overlay2/a3fbfb702a1204c83d8fcd07aeba21a7139c0f98a642a9049754275c50b0d89b/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220127031714-6703",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220127031714-6703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220127031714-6703",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1f42216df58be0f0ab6a0ffcecf614684c3782f227f54c0afcbc1fb8c8902e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/a1f42216df58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220127031714-6703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91f0a9797496",
	                        "old-k8s-version-20220127031714-6703"
	                    ],
	                    "NetworkID": "75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
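The postmortem inspect above pins down the failure: "docker start" exited non-zero because the container, which stopped at 03:23:21 with exit code 137, still references the old-k8s-version-20220127031714-6703 network in its HostConfig.NetworkMode and Networks settings, and State.Error records that network as not found; the earlier "docker network inspect old-k8s-version-20220127031714-6703" call had already returned "No such network". The same two commands the log runs, condensed, are enough to confirm the mismatch by hand:

    # Same commands the log already runs, condensed to show the missing-network state.
    docker inspect -f '{{.HostConfig.NetworkMode}} {{.State.Status}}/{{.State.ExitCode}}: {{.State.Error}}' \
      old-k8s-version-20220127031714-6703
    docker network inspect old-k8s-version-20220127031714-6703    # exit status 1: No such network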
	I0127 03:23:31.462082  234652 cli_runner.go:133] Run: docker logs --timestamps --details old-k8s-version-20220127031714-6703
	I0127 03:23:31.513992  234652 errors.go:91] Postmortem logs ("docker logs --timestamps --details old-k8s-version-20220127031714-6703"): -- stdout --
	2022-01-27T03:20:47.166845292Z  + userns=
	2022-01-27T03:20:47.166883146Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2022-01-27T03:20:47.169409854Z  + validate_userns
	2022-01-27T03:20:47.170823893Z  + [[ -z '' ]]
	2022-01-27T03:20:47.170835684Z  + return
	2022-01-27T03:20:47.170839958Z  + configure_containerd
	2022-01-27T03:20:47.170850373Z  ++ stat -f -c %T /kind
	2022-01-27T03:20:47.170866395Z  + [[ overlayfs == \z\f\s ]]
	2022-01-27T03:20:47.170871613Z  + configure_proxy
	2022-01-27T03:20:47.170875658Z  + mkdir -p /etc/systemd/system.conf.d/
	2022-01-27T03:20:47.172134379Z  + [[ ! -z '' ]]
	2022-01-27T03:20:47.172163426Z  + cat
	2022-01-27T03:20:47.173224503Z  + fix_kmsg
	2022-01-27T03:20:47.173236899Z  + [[ ! -e /dev/kmsg ]]
	2022-01-27T03:20:47.173239971Z  + fix_mount
	2022-01-27T03:20:47.173242582Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2022-01-27T03:20:47.173245651Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2022-01-27T03:20:47.173588664Z  ++ which mount
	2022-01-27T03:20:47.174746219Z  ++ which umount
	2022-01-27T03:20:47.175526186Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2022-01-27T03:20:47.181775358Z  ++ which mount
	2022-01-27T03:20:47.183136898Z  ++ which umount
	2022-01-27T03:20:47.183661909Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2022-01-27T03:20:47.185066971Z  +++ which mount
	2022-01-27T03:20:47.185807748Z  ++ stat -f -c %T /usr/bin/mount
	2022-01-27T03:20:47.186686911Z  + [[ overlayfs == \a\u\f\s ]]
	2022-01-27T03:20:47.186702171Z  + [[ -z '' ]]
	2022-01-27T03:20:47.186706541Z  + echo 'INFO: remounting /sys read-only'
	2022-01-27T03:20:47.186709932Z  INFO: remounting /sys read-only
	2022-01-27T03:20:47.186713110Z  + mount -o remount,ro /sys
	2022-01-27T03:20:47.188442209Z  + echo 'INFO: making mounts shared'
	2022-01-27T03:20:47.188457315Z  INFO: making mounts shared
	2022-01-27T03:20:47.188461681Z  + mount --make-rshared /
	2022-01-27T03:20:47.189625028Z  + retryable_fix_cgroup
	2022-01-27T03:20:47.189915785Z  ++ seq 0 10
	2022-01-27T03:20:47.190531866Z  + for i in $(seq 0 10)
	2022-01-27T03:20:47.190545483Z  + fix_cgroup
	2022-01-27T03:20:47.190549299Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2022-01-27T03:20:47.190553206Z  + echo 'INFO: detected cgroup v1'
	2022-01-27T03:20:47.190556960Z  INFO: detected cgroup v1
	2022-01-27T03:20:47.190560711Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2022-01-27T03:20:47.190564243Z  INFO: fix cgroup mounts for all subsystems
	2022-01-27T03:20:47.190579095Z  + local current_cgroup
	2022-01-27T03:20:47.191242527Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2022-01-27T03:20:47.191384525Z  ++ cut -d: -f3
	2022-01-27T03:20:47.192509710Z  + current_cgroup=/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.192533255Z  + local cgroup_subsystems
	2022-01-27T03:20:47.193083746Z  ++ findmnt -lun -o source,target -t cgroup
	2022-01-27T03:20:47.193286746Z  ++ grep /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.193297825Z  ++ awk '{print $2}'
	2022-01-27T03:20:47.195545193Z  + cgroup_subsystems='/sys/fs/cgroup/systemd
	2022-01-27T03:20:47.195558735Z  /sys/fs/cgroup/devices
	2022-01-27T03:20:47.195562580Z  /sys/fs/cgroup/cpuset
	2022-01-27T03:20:47.195566179Z  /sys/fs/cgroup/cpu,cpuacct
	2022-01-27T03:20:47.195569759Z  /sys/fs/cgroup/net_cls,net_prio
	2022-01-27T03:20:47.195572564Z  /sys/fs/cgroup/perf_event
	2022-01-27T03:20:47.195575904Z  /sys/fs/cgroup/memory
	2022-01-27T03:20:47.195579307Z  /sys/fs/cgroup/freezer
	2022-01-27T03:20:47.195582520Z  /sys/fs/cgroup/blkio
	2022-01-27T03:20:47.195586101Z  /sys/fs/cgroup/pids
	2022-01-27T03:20:47.195589285Z  /sys/fs/cgroup/hugetlb'
	2022-01-27T03:20:47.195592999Z  + local cgroup_mounts
	2022-01-27T03:20:47.195951096Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2022-01-27T03:20:47.197335264Z  + cgroup_mounts='/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:377 master:9 - cgroup cgroup
	2022-01-27T03:20:47.197348710Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:389 master:16 - cgroup cgroup
	2022-01-27T03:20:47.197353434Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:390 master:17 - cgroup cgroup
	2022-01-27T03:20:47.197357215Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:391 master:18 - cgroup cgroup
	2022-01-27T03:20:47.197361482Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:19 - cgroup cgroup
	2022-01-27T03:20:47.197365475Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:393 master:20 - cgroup cgroup
	2022-01-27T03:20:47.197369534Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:394 master:21 - cgroup cgroup
	2022-01-27T03:20:47.197373299Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:395 master:22 - cgroup cgroup
	2022-01-27T03:20:47.197387734Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:396 master:23 - cgroup cgroup
	2022-01-27T03:20:47.197391776Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:398 master:25 - cgroup cgroup
	2022-01-27T03:20:47.197395535Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:402 master:26 - cgroup cgroup'
	2022-01-27T03:20:47.197406287Z  + [[ -n /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:377 master:9 - cgroup cgroup
	2022-01-27T03:20:47.197422562Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:389 master:16 - cgroup cgroup
	2022-01-27T03:20:47.197427246Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:390 master:17 - cgroup cgroup
	2022-01-27T03:20:47.197436437Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:391 master:18 - cgroup cgroup
	2022-01-27T03:20:47.197440239Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:19 - cgroup cgroup
	2022-01-27T03:20:47.197443955Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:393 master:20 - cgroup cgroup
	2022-01-27T03:20:47.197447789Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:394 master:21 - cgroup cgroup
	2022-01-27T03:20:47.197451729Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:395 master:22 - cgroup cgroup
	2022-01-27T03:20:47.197455713Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:396 master:23 - cgroup cgroup
	2022-01-27T03:20:47.197459669Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:398 master:25 - cgroup cgroup
	2022-01-27T03:20:47.197463472Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:402 master:26 - cgroup cgroup ]]
	2022-01-27T03:20:47.197467361Z  + local mount_root
	2022-01-27T03:20:47.198054455Z  ++ head -n 1
	2022-01-27T03:20:47.198103330Z  ++ cut '-d ' -f1
	2022-01-27T03:20:47.199051645Z  + mount_root=/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.199662673Z  ++ echo '/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:377 master:9 - cgroup cgroup
	2022-01-27T03:20:47.199685710Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:389 master:16 - cgroup cgroup
	2022-01-27T03:20:47.199690432Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:390 master:17 - cgroup cgroup
	2022-01-27T03:20:47.199694524Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:391 master:18 - cgroup cgroup
	2022-01-27T03:20:47.199698333Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:19 - cgroup cgroup
	2022-01-27T03:20:47.199702114Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:393 master:20 - cgroup cgroup
	2022-01-27T03:20:47.199705822Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:394 master:21 - cgroup cgroup
	2022-01-27T03:20:47.199709341Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:395 master:22 - cgroup cgroup
	2022-01-27T03:20:47.199712928Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:396 master:23 - cgroup cgroup
	2022-01-27T03:20:47.199716629Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:398 master:25 - cgroup cgroup
	2022-01-27T03:20:47.199720928Z  /docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:402 master:26 - cgroup cgroup'
	2022-01-27T03:20:47.199730407Z  ++ cut '-d ' -f 2
	2022-01-27T03:20:47.200613271Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.200623940Z  + local target=/sys/fs/cgroup/systemd/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.200628293Z  + findmnt /sys/fs/cgroup/systemd/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.202331614Z  + mkdir -p /sys/fs/cgroup/systemd/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.203277191Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.204649010Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.204662934Z  + local target=/sys/fs/cgroup/devices/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.204666890Z  + findmnt /sys/fs/cgroup/devices/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.206655257Z  + mkdir -p /sys/fs/cgroup/devices/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.207563679Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.208759242Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.208777968Z  + local target=/sys/fs/cgroup/cpuset/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.208782352Z  + findmnt /sys/fs/cgroup/cpuset/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.210376065Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.259706772Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.261148341Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.261165739Z  + local target=/sys/fs/cgroup/cpu,cpuacct/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.261170292Z  + findmnt /sys/fs/cgroup/cpu,cpuacct/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.263012899Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.264161123Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.265384063Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.265394611Z  + local target=/sys/fs/cgroup/net_cls,net_prio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.265399019Z  + findmnt /sys/fs/cgroup/net_cls,net_prio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.267127447Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.268079348Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/net_cls,net_prio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.269387603Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.269401641Z  + local target=/sys/fs/cgroup/perf_event/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.269406079Z  + findmnt /sys/fs/cgroup/perf_event/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.270879668Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.271988753Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.273326357Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.273339615Z  + local target=/sys/fs/cgroup/memory/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.273354827Z  + findmnt /sys/fs/cgroup/memory/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.274858641Z  + mkdir -p /sys/fs/cgroup/memory/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.275906644Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.277070800Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.277079315Z  + local target=/sys/fs/cgroup/freezer/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.277081956Z  + findmnt /sys/fs/cgroup/freezer/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.278837139Z  + mkdir -p /sys/fs/cgroup/freezer/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.279861337Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.281007114Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.281018441Z  + local target=/sys/fs/cgroup/blkio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.281021221Z  + findmnt /sys/fs/cgroup/blkio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.282612267Z  + mkdir -p /sys/fs/cgroup/blkio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.283669944Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.284877479Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.284886623Z  + local target=/sys/fs/cgroup/pids/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.284951024Z  + findmnt /sys/fs/cgroup/pids/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.286583277Z  + mkdir -p /sys/fs/cgroup/pids/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.287884254Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.289105953Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-01-27T03:20:47.289129388Z  + local target=/sys/fs/cgroup/hugetlb/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.289133947Z  + findmnt /sys/fs/cgroup/hugetlb/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.290729543Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.291974656Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73
	2022-01-27T03:20:47.293116875Z  + mount --make-rprivate /sys/fs/cgroup
	2022-01-27T03:20:47.295690418Z  + echo '/sys/fs/cgroup/systemd
	2022-01-27T03:20:47.295706180Z  /sys/fs/cgroup/devices
	2022-01-27T03:20:47.295710899Z  /sys/fs/cgroup/cpuset
	2022-01-27T03:20:47.295714509Z  /sys/fs/cgroup/cpu,cpuacct
	2022-01-27T03:20:47.295718168Z  /sys/fs/cgroup/net_cls,net_prio
	2022-01-27T03:20:47.295721688Z  /sys/fs/cgroup/perf_event
	2022-01-27T03:20:47.295725097Z  /sys/fs/cgroup/memory
	2022-01-27T03:20:47.295728372Z  /sys/fs/cgroup/freezer
	2022-01-27T03:20:47.295731403Z  /sys/fs/cgroup/blkio
	2022-01-27T03:20:47.295734418Z  /sys/fs/cgroup/pids
	2022-01-27T03:20:47.295737598Z  /sys/fs/cgroup/hugetlb'
	2022-01-27T03:20:47.295740548Z  + IFS=
	2022-01-27T03:20:47.295744030Z  + read -r subsystem
	2022-01-27T03:20:47.295747451Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2022-01-27T03:20:47.295751040Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.295754529Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-01-27T03:20:47.295757536Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.295760523Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2022-01-27T03:20:47.296892202Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.296906443Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2022-01-27T03:20:47.298369055Z  + IFS=
	2022-01-27T03:20:47.298382629Z  + read -r subsystem
	2022-01-27T03:20:47.298387441Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2022-01-27T03:20:47.298390968Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.298418429Z  + local subsystem=/sys/fs/cgroup/devices
	2022-01-27T03:20:47.298437195Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.298440968Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2022-01-27T03:20:47.299620974Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.299635580Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2022-01-27T03:20:47.300902854Z  + IFS=
	2022-01-27T03:20:47.300915676Z  + read -r subsystem
	2022-01-27T03:20:47.300918686Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2022-01-27T03:20:47.300922234Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.300953731Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-01-27T03:20:47.300962672Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.300966552Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2022-01-27T03:20:47.302086310Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.302097190Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-01-27T03:20:47.303871580Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-01-27T03:20:47.304694044Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2022-01-27T03:20:47.306158899Z  + IFS=
	2022-01-27T03:20:47.306170750Z  + read -r subsystem
	2022-01-27T03:20:47.306175584Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu,cpuacct
	2022-01-27T03:20:47.306179981Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.306183056Z  + local subsystem=/sys/fs/cgroup/cpu,cpuacct
	2022-01-27T03:20:47.306225556Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.306239251Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-01-27T03:20:47.307277423Z  + '[' /sys/fs/cgroup/cpu,cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.307289046Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct//kubelet /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-01-27T03:20:47.308611189Z  + IFS=
	2022-01-27T03:20:47.308633822Z  + read -r subsystem
	2022-01-27T03:20:47.308639496Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls,net_prio
	2022-01-27T03:20:47.308643661Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.308647306Z  + local subsystem=/sys/fs/cgroup/net_cls,net_prio
	2022-01-27T03:20:47.308651077Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.308655719Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-01-27T03:20:47.309653104Z  + '[' /sys/fs/cgroup/net_cls,net_prio == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.309665051Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio//kubelet /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-01-27T03:20:47.310897882Z  + IFS=
	2022-01-27T03:20:47.310908734Z  + read -r subsystem
	2022-01-27T03:20:47.310911709Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2022-01-27T03:20:47.310914493Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.310916724Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-01-27T03:20:47.310924121Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.310928085Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2022-01-27T03:20:47.312030634Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.312043985Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2022-01-27T03:20:47.315193208Z  + IFS=
	2022-01-27T03:20:47.315207758Z  + read -r subsystem
	2022-01-27T03:20:47.315271535Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2022-01-27T03:20:47.315277951Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.315281649Z  + local subsystem=/sys/fs/cgroup/memory
	2022-01-27T03:20:47.315285203Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.315339864Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2022-01-27T03:20:47.316742379Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.316753035Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2022-01-27T03:20:47.318072269Z  + IFS=
	2022-01-27T03:20:47.318083510Z  + read -r subsystem
	2022-01-27T03:20:47.318088055Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2022-01-27T03:20:47.318092038Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.318095762Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-01-27T03:20:47.318099735Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.318105182Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2022-01-27T03:20:47.319459076Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.319475907Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2022-01-27T03:20:47.320698374Z  + IFS=
	2022-01-27T03:20:47.320711820Z  + read -r subsystem
	2022-01-27T03:20:47.320716068Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2022-01-27T03:20:47.320767918Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.320782063Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-01-27T03:20:47.320786498Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.320790115Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2022-01-27T03:20:47.321774440Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.321787521Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2022-01-27T03:20:47.323129600Z  + IFS=
	2022-01-27T03:20:47.323143413Z  + read -r subsystem
	2022-01-27T03:20:47.323147550Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2022-01-27T03:20:47.323151225Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.323189316Z  + local subsystem=/sys/fs/cgroup/pids
	2022-01-27T03:20:47.323196491Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.323200195Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2022-01-27T03:20:47.324231902Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.324244329Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2022-01-27T03:20:47.325570365Z  + IFS=
	2022-01-27T03:20:47.325584501Z  + read -r subsystem
	2022-01-27T03:20:47.325588834Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2022-01-27T03:20:47.325592770Z  + local cgroup_root=/kubelet
	2022-01-27T03:20:47.325598076Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-01-27T03:20:47.325602059Z  + '[' -z /kubelet ']'
	2022-01-27T03:20:47.325605833Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2022-01-27T03:20:47.326909389Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-01-27T03:20:47.326935438Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2022-01-27T03:20:47.328414143Z  + IFS=
	2022-01-27T03:20:47.328428200Z  + read -r subsystem
	2022-01-27T03:20:47.328707982Z  + return
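The trace above is the entrypoint's cgroup v1 fix: it resolves the container's own cgroup path from /proc/self/cgroup, bind-mounts each cgroup subsystem mount back onto that path so the path resolves from inside the container, and then creates and self-bind-mounts a /kubelet root in every subsystem. A condensed sketch of that logic follows; the container path and subsystem list come from this log, while the variable names and loop structure are illustrative rather than the entrypoint's actual source:

    # Requires root inside a cgroup v1 container (as detected above).
    current_cgroup="$(grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup | cut -d: -f3)"
    cgroup_subsystems="$(findmnt -lun -o source,target -t cgroup | grep "${current_cgroup}" | awk '{print $2}')"

    # Make each subsystem's view of this container's cgroup path resolvable.
    for mount_point in $(findmnt -lun -o target -t cgroup); do
        target="${mount_point}${current_cgroup}"
        findmnt "${target}" > /dev/null 2>&1 || {
            mkdir -p "${target}"
            mount --bind "${mount_point}" "${target}"
        }
    done
    mount --make-rprivate /sys/fs/cgroup

    # Pre-create a /kubelet root in every subsystem and bind it onto itself,
    # which is what the mount_kubelet_cgroup_root calls above amount to.
    echo "${cgroup_subsystems}" | while IFS= read -r subsystem; do
        mkdir -p "${subsystem}/kubelet"
        mount --bind "${subsystem}/kubelet" "${subsystem}/kubelet"
    done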
	2022-01-27T03:20:47.328721092Z  + fix_machine_id
	2022-01-27T03:20:47.328756060Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2022-01-27T03:20:47.328761482Z  INFO: clearing and regenerating /etc/machine-id
	2022-01-27T03:20:47.329042679Z  + rm -f /etc/machine-id
	2022-01-27T03:20:47.329789700Z  + systemd-machine-id-setup
	2022-01-27T03:20:47.333141762Z  Initializing machine ID from D-Bus machine ID.
	2022-01-27T03:20:47.341056744Z  + fix_product_name
	2022-01-27T03:20:47.341070468Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2022-01-27T03:20:47.341073956Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2022-01-27T03:20:47.341076750Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2022-01-27T03:20:47.341079373Z  + echo kind
	2022-01-27T03:20:47.341247978Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2022-01-27T03:20:47.342604838Z  + fix_product_uuid
	2022-01-27T03:20:47.342617737Z  + [[ ! -f /kind/product_uuid ]]
	2022-01-27T03:20:47.342621833Z  + cat /proc/sys/kernel/random/uuid
	2022-01-27T03:20:47.343665305Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2022-01-27T03:20:47.343676408Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2022-01-27T03:20:47.343680846Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2022-01-27T03:20:47.343732413Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2022-01-27T03:20:47.345008110Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2022-01-27T03:20:47.345020986Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2022-01-27T03:20:47.345025402Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2022-01-27T03:20:47.345029533Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
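Faking the DMI identifiers, as traced just above, is nothing more than a read-only bind mount of a regular file over the sysfs node; roughly, using the commands and paths visible in this log:

    echo kind > /kind/product_name
    mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name

    cat /proc/sys/kernel/random/uuid > /kind/product_uuid
    mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
    mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid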
	2022-01-27T03:20:47.346325808Z  + select_iptables
	2022-01-27T03:20:47.346338846Z  + local mode=nft
	2022-01-27T03:20:47.347224711Z  ++ grep '^-'
	2022-01-27T03:20:47.347368118Z  ++ wc -l
	2022-01-27T03:20:47.350753972Z  + num_legacy_lines=6
	2022-01-27T03:20:47.350772021Z  + '[' 6 -ge 10 ']'
	2022-01-27T03:20:47.351613288Z  ++ grep '^-'
	2022-01-27T03:20:47.351757538Z  ++ wc -l
	2022-01-27T03:20:47.355375439Z  ++ true
	2022-01-27T03:20:47.355675776Z  + num_nft_lines=0
	2022-01-27T03:20:47.355690668Z  + '[' 6 -ge 0 ']'
	2022-01-27T03:20:47.355705077Z  + mode=legacy
	2022-01-27T03:20:47.355709371Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2022-01-27T03:20:47.355713084Z  INFO: setting iptables to detected mode: legacy
	2022-01-27T03:20:47.355716712Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-01-27T03:20:47.355788541Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2022-01-27T03:20:47.355800364Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2022-01-27T03:20:47.356173628Z  ++ seq 0 15
	2022-01-27T03:20:47.356759908Z  + for i in $(seq 0 15)
	2022-01-27T03:20:47.356774650Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-01-27T03:20:47.360598194Z  + return
	2022-01-27T03:20:47.360612437Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-01-27T03:20:47.360674051Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2022-01-27T03:20:47.360687182Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2022-01-27T03:20:47.361051614Z  ++ seq 0 15
	2022-01-27T03:20:47.361663451Z  + for i in $(seq 0 15)
	2022-01-27T03:20:47.361670272Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-01-27T03:20:47.364034162Z  + return
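select_iptables, traced above, counts the rules each iptables backend reports and points update-alternatives at the busier one; here 6 legacy rules against 0 nft rules selects legacy. In outline (a sketch of the decision, not the script verbatim; the *-save binaries and variable names are the usual iptables packaging but are assumptions here):

    mode=nft
    num_legacy_lines="$(iptables-legacy-save 2>/dev/null | grep -c '^-')" || true
    if [ "${num_legacy_lines:-0}" -ge 10 ]; then
        mode=legacy
    else
        num_nft_lines="$(iptables-nft-save 2>/dev/null | grep -c '^-')" || true
        if [ "${num_legacy_lines:-0}" -ge "${num_nft_lines:-0}" ]; then
            mode=legacy
        fi
    fi
    update-alternatives --set iptables "/usr/sbin/iptables-${mode}"
    update-alternatives --set ip6tables "/usr/sbin/ip6tables-${mode}"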
	2022-01-27T03:20:47.364048370Z  + enable_network_magic
	2022-01-27T03:20:47.364101604Z  + local docker_embedded_dns_ip=127.0.0.11
	2022-01-27T03:20:47.364109069Z  + local docker_host_ip
	2022-01-27T03:20:47.365137633Z  ++ cut '-d ' -f1
	2022-01-27T03:20:47.365227796Z  ++ head -n1 /dev/fd/63
	2022-01-27T03:20:47.365322946Z  +++ getent ahostsv4 host.docker.internal
	2022-01-27T03:20:47.378644084Z  + docker_host_ip=
	2022-01-27T03:20:47.378659802Z  + [[ -z '' ]]
	2022-01-27T03:20:47.379262405Z  ++ ip -4 route show default
	2022-01-27T03:20:47.379413964Z  ++ cut '-d ' -f3
	2022-01-27T03:20:47.381168121Z  + docker_host_ip=192.168.76.1
	2022-01-27T03:20:47.381180176Z  + iptables-save
	2022-01-27T03:20:47.381374449Z  + iptables-restore
	2022-01-27T03:20:47.382626305Z  + sed -e 's/-d 127.0.0.11/-d 192.168.76.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.76.1:53/g'
	2022-01-27T03:20:47.385354998Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2022-01-27T03:20:47.386587984Z  + sed -e s/127.0.0.11/192.168.76.1/g /etc/resolv.conf.original
	2022-01-27T03:20:47.388760866Z  ++ cut '-d ' -f1
	2022-01-27T03:20:47.388826197Z  ++ head -n1 /dev/fd/63
	2022-01-27T03:20:47.389314600Z  ++++ hostname
	2022-01-27T03:20:47.390061987Z  +++ getent ahostsv4 old-k8s-version-20220127031714-6703
	2022-01-27T03:20:47.391811311Z  + curr_ipv4=192.168.76.2
	2022-01-27T03:20:47.391823932Z  + echo 'INFO: Detected IPv4 address: 192.168.76.2'
	2022-01-27T03:20:47.391827881Z  INFO: Detected IPv4 address: 192.168.76.2
	2022-01-27T03:20:47.391831551Z  + '[' -f /kind/old-ipv4 ']'
	2022-01-27T03:20:47.391871387Z  + [[ -n 192.168.76.2 ]]
	2022-01-27T03:20:47.391896298Z  + echo -n 192.168.76.2
	2022-01-27T03:20:47.392941547Z  ++ cut '-d ' -f1
	2022-01-27T03:20:47.392993273Z  ++ head -n1 /dev/fd/63
	2022-01-27T03:20:47.393459982Z  ++++ hostname
	2022-01-27T03:20:47.394146601Z  +++ getent ahostsv6 old-k8s-version-20220127031714-6703
	2022-01-27T03:20:47.395592698Z  + curr_ipv6=
	2022-01-27T03:20:47.395606879Z  + echo 'INFO: Detected IPv6 address: '
	2022-01-27T03:20:47.395611261Z  INFO: Detected IPv6 address: 
	2022-01-27T03:20:47.395615529Z  + '[' -f /kind/old-ipv6 ']'
	2022-01-27T03:20:47.395619051Z  + [[ -n '' ]]
	2022-01-27T03:20:47.396029844Z  ++ uname -a
	2022-01-27T03:20:47.396616441Z  + echo 'entrypoint completed: Linux old-k8s-version-20220127031714-6703 5.11.0-1028-gcp #32~20.04.1-Ubuntu SMP Wed Jan 12 20:08:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux'
	2022-01-27T03:20:47.396628884Z  entrypoint completed: Linux old-k8s-version-20220127031714-6703 5.11.0-1028-gcp #32~20.04.1-Ubuntu SMP Wed Jan 12 20:08:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	2022-01-27T03:20:47.396633258Z  + exec /sbin/init
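enable_network_magic, the last step before init is exec'd, swaps Docker's embedded DNS address (127.0.0.11) for the network gateway (192.168.76.1 in this run) in both the NAT rules and resolv.conf, so name resolution keeps working for processes that bypass the embedded resolver. Approximately, following the sed expressions visible in the trace (variable names are illustrative):

    docker_embedded_dns_ip=127.0.0.11
    docker_host_ip="$(ip -4 route show default | cut -d' ' -f3)"

    # Rewrite DNAT rules aimed at the embedded DNS and mirror the OUTPUT rules into PREROUTING.
    iptables-save \
        | sed -e "s/-d ${docker_embedded_dns_ip}/-d ${docker_host_ip}/g" \
              -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' \
              -e "s/--to-source :53/--to-source ${docker_host_ip}:53/g" \
        | iptables-restore

    # Point resolv.conf at the same address.
    cp /etc/resolv.conf /etc/resolv.conf.original
    sed -e "s/${docker_embedded_dns_ip}/${docker_host_ip}/g" /etc/resolv.conf.original > /etc/resolv.conf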
	2022-01-27T03:20:47.402732713Z  systemd 245.4-4ubuntu3.13 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2022-01-27T03:20:47.402753531Z  Detected virtualization docker.
	2022-01-27T03:20:47.402756770Z  Detected architecture x86-64.
	2022-01-27T03:20:47.403069664Z  
	2022-01-27T03:20:47.403083447Z  Welcome to Ubuntu 20.04.2 LTS!
	2022-01-27T03:20:47.403088210Z  
	2022-01-27T03:20:47.403099715Z  Set hostname to <old-k8s-version-20220127031714-6703>.
	2022-01-27T03:20:47.443047709Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2022-01-27T03:20:47.443200554Z  [  OK  ] Set up automount Arbitrary…s File System Automount Point.
	2022-01-27T03:20:47.443211734Z  [  OK  ] Reached target Local Encrypted Volumes.
	2022-01-27T03:20:47.443331683Z  [  OK  ] Reached target Network is Online.
	2022-01-27T03:20:47.443359517Z  [  OK  ] Reached target Paths.
	2022-01-27T03:20:47.443365370Z  [  OK  ] Reached target Slices.
	2022-01-27T03:20:47.443384888Z  [  OK  ] Reached target Swap.
	2022-01-27T03:20:47.443589802Z  [  OK  ] Listening on Journal Audit Socket.
	2022-01-27T03:20:47.443687901Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2022-01-27T03:20:47.443885779Z  [  OK  ] Listening on Journal Socket.
	2022-01-27T03:20:47.445256615Z           Mounting Huge Pages File System...
	2022-01-27T03:20:47.446595182Z           Mounting Kernel Debug File System...
	2022-01-27T03:20:47.447959607Z           Mounting Kernel Trace File System...
	2022-01-27T03:20:47.449593893Z           Starting Journal Service...
	2022-01-27T03:20:47.450892583Z           Starting Create list of st…odes for the current kernel...
	2022-01-27T03:20:47.453658170Z           Mounting FUSE Control File System...
	2022-01-27T03:20:47.454086040Z           Starting Remount Root and Kernel File Systems...
	2022-01-27T03:20:47.455615606Z           Starting Apply Kernel Variables...
	2022-01-27T03:20:47.457385657Z  [  OK  ] Mounted Huge Pages File System.
	2022-01-27T03:20:47.457526517Z  [  OK  ] Mounted Kernel Debug File System.
	2022-01-27T03:20:47.457659596Z  [  OK  ] Mounted Kernel Trace File System.
	2022-01-27T03:20:47.458363685Z  [  OK  ] Finished Create list of st… nodes for the current kernel.
	2022-01-27T03:20:47.458615255Z  [  OK  ] Mounted FUSE Control File System.
	2022-01-27T03:20:47.459296153Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2022-01-27T03:20:47.460913233Z           Starting Create System Users...
	2022-01-27T03:20:47.462326356Z           Starting Update UTMP about System Boot/Shutdown...
	2022-01-27T03:20:47.463238748Z  [  OK  ] Finished Apply Kernel Variables.
	2022-01-27T03:20:47.468868438Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2022-01-27T03:20:47.484458728Z  [  OK  ] Finished Create System Users.
	2022-01-27T03:20:47.484484145Z           Starting Create Static Device Nodes in /dev...
	2022-01-27T03:20:47.484554140Z  [  OK  ] Started Journal Service.
	2022-01-27T03:20:47.486957644Z           Starting Flush Journal to Persistent Storage...
	2022-01-27T03:20:47.492010442Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2022-01-27T03:20:47.492764333Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2022-01-27T03:20:47.492920578Z  [  OK  ] Reached target Local File Systems (Pre).
	2022-01-27T03:20:47.492939985Z  [  OK  ] Reached target Local File Systems.
	2022-01-27T03:20:47.493175471Z  [  OK  ] Reached target System Initialization.
	2022-01-27T03:20:47.493190618Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2022-01-27T03:20:47.493219232Z  [  OK  ] Reached target Timers.
	2022-01-27T03:20:47.493368945Z  [  OK  ] Listening on BuildKit.
	2022-01-27T03:20:47.493479589Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2022-01-27T03:20:47.494635433Z           Starting Docker Socket for the API.
	2022-01-27T03:20:47.504876585Z           Starting Podman API Socket.
	2022-01-27T03:20:47.506370893Z  [  OK  ] Listening on Docker Socket for the API.
	2022-01-27T03:20:47.506437039Z  [  OK  ] Listening on Podman API Socket.
	2022-01-27T03:20:47.506488816Z  [  OK  ] Reached target Sockets.
	2022-01-27T03:20:47.506572778Z  [  OK  ] Reached target Basic System.
	2022-01-27T03:20:47.507768967Z           Starting containerd container runtime...
	2022-01-27T03:20:47.509330271Z  [  OK  ] Started D-Bus System Message Bus.
	2022-01-27T03:20:47.512665640Z           Starting minikube automount...
	2022-01-27T03:20:47.514091163Z           Starting OpenBSD Secure Shell server...
	2022-01-27T03:20:47.528967592Z  [  OK  ] Finished minikube automount.
	2022-01-27T03:20:47.559382637Z  [  OK  ] Started OpenBSD Secure Shell server.
	2022-01-27T03:20:47.570200838Z  [  OK  ] Started containerd container runtime.
	2022-01-27T03:20:47.571364894Z           Starting Docker Application Container Engine...
	2022-01-27T03:20:47.818074788Z  [  OK  ] Started Docker Application Container Engine.
	2022-01-27T03:20:47.818224760Z  [  OK  ] Reached target Multi-User System.
	2022-01-27T03:20:47.818252499Z  [  OK  ] Reached target Graphical Interface.
	2022-01-27T03:20:47.820245627Z           Starting Update UTMP about System Runlevel Changes...
	2022-01-27T03:20:47.827923748Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2022-01-27T03:23:02.194811082Z  [  OK  ] Stopped target Graphical Interface.
	2022-01-27T03:23:02.194842988Z  [  OK  ] Stopped target Multi-User System.
	2022-01-27T03:23:02.194847838Z  [  OK  ] Stopped target Timers.
	2022-01-27T03:23:02.194853595Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2022-01-27T03:23:02.194857349Z           Stopping D-Bus System Message Bus...
	2022-01-27T03:23:02.194861148Z           Stopping Docker Application Container Engine...
	2022-01-27T03:23:02.194865006Z           Stopping kubelet: The Kubernetes Node Agent...
	2022-01-27T03:23:02.195355260Z           Stopping OpenBSD Secure Shell server...
	2022-01-27T03:23:02.198576065Z  [  OK  ] Stopped D-Bus System Message Bus.
	2022-01-27T03:23:02.198595425Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2022-01-27T03:23:02.204697231Z  [  OK  ] Stopped Docker Application Container Engine.
	2022-01-27T03:23:02.204986944Z  [  OK  ] Stopped target Network is Online.
	2022-01-27T03:23:02.206431061Z           Stopping containerd container runtime...
	2022-01-27T03:23:02.206448978Z  [  OK  ] Stopped minikube automount.
	2022-01-27T03:23:02.216784115Z  [  OK  ] Stopped containerd container runtime.
	2022-01-27T03:23:02.264325242Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2022-01-27T03:23:02.264704680Z  [  OK  ] Stopped target Basic System.
	2022-01-27T03:23:02.264723885Z  [  OK  ] Stopped target Paths.
	2022-01-27T03:23:02.264729048Z  [  OK  ] Stopped target Slices.
	2022-01-27T03:23:02.264733528Z  [  OK  ] Stopped target Sockets.
	2022-01-27T03:23:02.288345511Z  [  OK  ] Closed BuildKit.
	2022-01-27T03:23:02.288956433Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2022-01-27T03:23:02.289708240Z  [  OK  ] Closed Docker Socket for the API.
	2022-01-27T03:23:02.290457967Z  [  OK  ] Closed Podman API Socket.
	2022-01-27T03:23:02.290476761Z  [  OK  ] Stopped target System Initialization.
	2022-01-27T03:23:02.290493320Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2022-01-27T03:23:02.307436732Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2022-01-27T03:23:02.307457754Z  [  OK  ] Stopped target Local File Systems.
	2022-01-27T03:23:02.309014431Z           Unmounting /data...
	2022-01-27T03:23:02.309829255Z           Unmounting /etc/hostname...
	2022-01-27T03:23:02.310035643Z           Unmounting /etc/hosts...
	2022-01-27T03:23:02.311234762Z           Unmounting /etc/resolv.conf...
	2022-01-27T03:23:02.312449535Z           Unmounting /kind/product_uuid...
	2022-01-27T03:23:02.313812744Z           Unmounting /run/containerd…333eb8f1abddaa8d37b4667/shm...
	2022-01-27T03:23:02.315416772Z           Unmounting /run/containerd…b382f20f17ed1e7d2f76d68/shm...
	2022-01-27T03:23:02.319840048Z           Unmounting /run/containerd…75b219f31bc9edee584b704/shm...
	2022-01-27T03:23:02.320264630Z           Unmounting /run/containerd…39840f454273bb78191fc32/shm...
	2022-01-27T03:23:02.321714746Z           Unmounting /run/containerd…dc402c6ed887247644feb27/shm...
	2022-01-27T03:23:02.323246109Z           Unmounting /run/containerd…3f5929d8fa8ca68e230481e/shm...
	2022-01-27T03:23:02.324919681Z           Unmounting /run/containerd…de332074adfb50337f4a368/shm...
	2022-01-27T03:23:02.326499973Z           Unmounting /run/containerd…8360c732a8ec4aa05e5e56b/shm...
	2022-01-27T03:23:02.327899954Z           Unmounting /run/containerd…bfc3835e27f59803443078d/shm...
	2022-01-27T03:23:02.329231764Z           Unmounting /run/containerd…87635b19be3d68c920eb4de/shm...
	2022-01-27T03:23:02.330707419Z           Unmounting /run/containerd…977691262457ab45bbbc/rootfs...
	2022-01-27T03:23:02.333533545Z           Unmounting /run/containerd…eb8f1abddaa8d37b4667/rootfs...
	2022-01-27T03:23:02.333930464Z           Unmounting /run/containerd…566b44267b54ef7194f4/rootfs...
	2022-01-27T03:23:02.335481104Z           Unmounting /run/containerd…2f20f17ed1e7d2f76d68/rootfs...
	2022-01-27T03:23:02.337047649Z           Unmounting /run/containerd…219f31bc9edee584b704/rootfs...
	2022-01-27T03:23:02.341823498Z           Unmounting /run/containerd…5da2d89660bd8e82913e/rootfs...
	2022-01-27T03:23:02.344577699Z           Unmounting /run/containerd…9f25fbbc2d6f17e2f240/rootfs...
	2022-01-27T03:23:02.345132248Z           Unmounting /run/containerd…474c6d3e87f6eec23012/rootfs...
	2022-01-27T03:23:02.346601697Z           Unmounting /run/containerd…40f454273bb78191fc32/rootfs...
	2022-01-27T03:23:02.349569546Z           Unmounting /run/containerd…02c6ed887247644feb27/rootfs...
	2022-01-27T03:23:02.350985168Z           Unmounting /run/containerd…929d8fa8ca68e230481e/rootfs...
	2022-01-27T03:23:02.352325791Z           Unmounting /run/containerd…d87c0b33934d4b4b123c/rootfs...
	2022-01-27T03:23:02.354293693Z           Unmounting /run/containerd…6ba4d3864c9d7ae64eef/rootfs...
	2022-01-27T03:23:02.357096190Z           Unmounting /run/containerd…32074adfb50337f4a368/rootfs...
	2022-01-27T03:23:02.358719716Z           Unmounting /run/containerd…f1be103a2e440cafee66/rootfs...
	2022-01-27T03:23:02.358994660Z           Unmounting /run/containerd…0c732a8ec4aa05e5e56b/rootfs...
	2022-01-27T03:23:02.361457541Z           Unmounting /run/containerd…3835e27f59803443078d/rootfs...
	2022-01-27T03:23:02.362077439Z           Unmounting /run/containerd…35b19be3d68c920eb4de/rootfs...
	2022-01-27T03:23:02.363657459Z           Unmounting /run/containerd…27dcd99bfc253fc52444/rootfs...
	2022-01-27T03:23:02.366007247Z           Unmounting /run/netns/cni-…4a10-0868-3cef-9b80a197117b...
	2022-01-27T03:23:02.367507239Z           Unmounting /run/netns/cni-…a093-5e1e-df89-09f5cdd14b38...
	2022-01-27T03:23:02.368529105Z           Unmounting /run/netns/cni-…f584-82a2-7936-e1086ddd4a4a...
	2022-01-27T03:23:02.369138738Z           Unmounting /tmp/hostpath-provisioner...
	2022-01-27T03:23:02.370387568Z           Unmounting /tmp/hostpath_pv...
	2022-01-27T03:23:02.371613699Z           Unmounting /usr/lib/modules...
	2022-01-27T03:23:02.405124247Z           Unmounting /var/lib/kubele…~secret/default-token-rgcg4...
	2022-01-27T03:23:02.406792162Z           Unmounting /var/lib/kubele…~secret/coredns-token-6zthz...
	2022-01-27T03:23:02.408559841Z           Unmounting /var/lib/kubele…cret/kube-proxy-token-p2864...
	2022-01-27T03:23:02.410039918Z           Unmounting /var/lib/kubele…~secret/kindnet-token-lxwm4...
	2022-01-27T03:23:02.411469128Z           Unmounting /var/lib/kubele…age-provisioner-token-2xkxn...
	2022-01-27T03:23:02.412974994Z           Unmounting /var/lib/kubele…/metrics-server-token-hrtz7...
	2022-01-27T03:23:02.413785907Z  [  OK  ] Stopped Apply Kernel Variables.
	2022-01-27T03:23:02.414537824Z           Stopping Update UTMP about System Boot/Shutdown...
	2022-01-27T03:23:02.420484303Z  [  OK  ] Unmounted /data.
	2022-01-27T03:23:02.421170819Z  [  OK  ] Unmounted /etc/hostname.
	2022-01-27T03:23:02.421923617Z  [  OK  ] Unmounted /etc/hosts.
	2022-01-27T03:23:02.422593655Z  [  OK  ] Unmounted /etc/resolv.conf.
	2022-01-27T03:23:02.423382972Z  [  OK  ] Unmounted /kind/product_uuid.
	2022-01-27T03:23:02.424057698Z  [  OK  ] Unmounted /run/containerd/…2d333eb8f1abddaa8d37b4667/shm.
	2022-01-27T03:23:02.424801286Z  [  OK  ] Unmounted /run/containerd/…19b382f20f17ed1e7d2f76d68/shm.
	2022-01-27T03:23:02.425479253Z  [  OK  ] Unmounted /run/containerd/…2275b219f31bc9edee584b704/shm.
	2022-01-27T03:23:02.426888904Z  [  OK  ] Unmounted /run/containerd/…8739840f454273bb78191fc32/shm.
	2022-01-27T03:23:02.426922023Z  [  OK  ] Unmounted /run/containerd/…7adc402c6ed887247644feb27/shm.
	2022-01-27T03:23:02.428027451Z  [  OK  ] Unmounted /run/containerd/…173f5929d8fa8ca68e230481e/shm.
	2022-01-27T03:23:02.429020159Z  [  OK  ] Unmounted /run/containerd/…d4de332074adfb50337f4a368/shm.
	2022-01-27T03:23:02.429921142Z  [  OK  ] Unmounted /run/containerd/…518360c732a8ec4aa05e5e56b/shm.
	2022-01-27T03:23:02.430634884Z  [  OK  ] Unmounted /run/containerd/…77bfc3835e27f59803443078d/shm.
	2022-01-27T03:23:02.431348264Z  [  OK  ] Unmounted /run/containerd/…9b87635b19be3d68c920eb4de/shm.
	2022-01-27T03:23:02.432001364Z  [  OK  ] Unmounted /run/containerd/…3f977691262457ab45bbbc/rootfs.
	2022-01-27T03:23:02.432692850Z  [  OK  ] Unmounted /run/containerd/…33eb8f1abddaa8d37b4667/rootfs.
	2022-01-27T03:23:02.433323868Z  [  OK  ] Unmounted /run/containerd/…e4566b44267b54ef7194f4/rootfs.
	2022-01-27T03:23:02.433950525Z  [  OK  ] Unmounted /run/containerd/…382f20f17ed1e7d2f76d68/rootfs.
	2022-01-27T03:23:02.434687946Z  [  OK  ] Unmounted /run/containerd/…5b219f31bc9edee584b704/rootfs.
	2022-01-27T03:23:02.435377009Z  [  OK  ] Unmounted /run/containerd/…e55da2d89660bd8e82913e/rootfs.
	2022-01-27T03:23:02.436104323Z  [  OK  ] Unmounted /run/containerd/…4d9f25fbbc2d6f17e2f240/rootfs.
	2022-01-27T03:23:02.437021940Z  [  OK  ] Unmounted /run/containerd/…7d474c6d3e87f6eec23012/rootfs.
	2022-01-27T03:23:02.437815741Z  [  OK  ] Unmounted /run/containerd/…9840f454273bb78191fc32/rootfs.
	2022-01-27T03:23:02.438643995Z  [  OK  ] Unmounted /run/containerd/…c402c6ed887247644feb27/rootfs.
	2022-01-27T03:23:02.439332380Z  [  OK  ] Unmounted /run/containerd/…f5929d8fa8ca68e230481e/rootfs.
	2022-01-27T03:23:02.440056252Z  [  OK  ] Unmounted /run/containerd/…6bd87c0b33934d4b4b123c/rootfs.
	2022-01-27T03:23:02.440749778Z  [  OK  ] Unmounted /run/containerd/…b36ba4d3864c9d7ae64eef/rootfs.
	2022-01-27T03:23:02.441495514Z  [  OK  ] Unmounted /run/containerd/…e332074adfb50337f4a368/rootfs.
	2022-01-27T03:23:02.442175134Z  [  OK  ] Unmounted /run/containerd/…1af1be103a2e440cafee66/rootfs.
	2022-01-27T03:23:02.442870762Z  [  OK  ] Unmounted /run/containerd/…360c732a8ec4aa05e5e56b/rootfs.
	2022-01-27T03:23:02.444332461Z  [  OK  ] Unmounted /run/containerd/…fc3835e27f59803443078d/rootfs.
	2022-01-27T03:23:02.445016691Z  [  OK  ] Unmounted /run/containerd/…7635b19be3d68c920eb4de/rootfs.
	2022-01-27T03:23:02.445803319Z  [  OK  ] Unmounted /run/containerd/…0727dcd99bfc253fc52444/rootfs.
	2022-01-27T03:23:02.446673394Z  [  OK  ] Unmounted /run/netns/cni-4…c-4a10-0868-3cef-9b80a197117b.
	2022-01-27T03:23:02.447445728Z  [  OK  ] Unmounted /run/netns/cni-8…9-a093-5e1e-df89-09f5cdd14b38.
	2022-01-27T03:23:02.448159741Z  [  OK  ] Unmounted /run/netns/cni-b…c-f584-82a2-7936-e1086ddd4a4a.
	2022-01-27T03:23:02.448771024Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2022-01-27T03:23:02.449386207Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2022-01-27T03:23:02.449946337Z  [  OK  ] Unmounted /usr/lib/modules.
	2022-01-27T03:23:02.450531280Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/default-token-rgcg4.
	2022-01-27T03:23:02.451200193Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/coredns-token-6zthz.
	2022-01-27T03:23:02.454852331Z  [  OK  ] Unmounted /var/lib/kubelet…secret/kube-proxy-token-p2864.
	2022-01-27T03:23:02.455560223Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/kindnet-token-lxwm4.
	2022-01-27T03:23:02.456196000Z  [  OK  ] Unmounted /var/lib/kubelet…orage-provisioner-token-2xkxn.
	2022-01-27T03:23:02.456822603Z  [  OK  ] Unmounted /var/lib/kubelet…et/metrics-server-token-hrtz7.
	2022-01-27T03:23:02.459763314Z           Unmounting /tmp...
	2022-01-27T03:23:02.460858579Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2022-01-27T03:23:02.462334666Z           Unmounting /var...
	2022-01-27T03:23:02.464297200Z  [  OK  ] Unmounted /tmp.
	2022-01-27T03:23:02.464398759Z  [  OK  ] Stopped target Swap.
	2022-01-27T03:23:02.467486577Z  [  OK  ] Unmounted /var.
	2022-01-27T03:23:02.467575713Z  [  OK  ] Stopped target Local File Systems (Pre).
	2022-01-27T03:23:02.467624841Z  [  OK  ] Reached target Unmount All Filesystems.
	2022-01-27T03:23:02.469178986Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2022-01-27T03:23:02.470361448Z  [  OK  ] Stopped Create System Users.
	2022-01-27T03:23:02.470890463Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2022-01-27T03:23:02.470904208Z  [  OK  ] Reached target Shutdown.
	2022-01-27T03:23:02.470908598Z  [  OK  ] Reached target Final Step.
	2022-01-27T03:23:02.471133196Z  [  OK  ] Finished Power-Off.
	2022-01-27T03:23:02.471143916Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
	I0127 03:23:31.514141  234652 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:23:31.660069  234652 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:45 SystemTime:2022-01-27 03:23:31.56711463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:23:31.660151  234652 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:45 SystemTime:2022-01-27 03:23:31.56711463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:23:31.660230  234652 network_create.go:254] running [docker network inspect old-k8s-version-20220127031714-6703] to gather additional debugging logs...
	I0127 03:23:31.660244  234652 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220127031714-6703
	W0127 03:23:31.695815  234652 cli_runner.go:180] docker network inspect old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:31.695851  234652 network_create.go:257] error running [docker network inspect old-k8s-version-20220127031714-6703]: docker network inspect old-k8s-version-20220127031714-6703: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220127031714-6703
	I0127 03:23:31.695872  234652 network_create.go:259] output of [docker network inspect old-k8s-version-20220127031714-6703]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220127031714-6703
	
	** /stderr **
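The inspect failure above simply means the profile's Docker network is already gone at postmortem time; the same check can be repeated by hand to see whether any minikube-managed networks for this profile survive (profile name taken from this log):

    # Expected to return nothing here, matching the "No such network" error above.
    docker network ls --filter name=old-k8s-version-20220127031714-6703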
	I0127 03:23:31.695958  234652 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:23:31.812666  234652 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:49 SystemTime:2022-01-27 03:23:31.737632243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:23:31.813114  234652 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220127031714-6703
	I0127 03:23:31.861903  234652 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/old-k8s-version-20220127031714-6703/config.json ...
	I0127 03:23:31.862172  234652 machine.go:88] provisioning docker machine ...
	I0127 03:23:31.862196  234652 ubuntu.go:169] provisioning hostname "old-k8s-version-20220127031714-6703"
	I0127 03:23:31.862239  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:31.907037  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:31.907122  234652 machine.go:91] provisioned docker machine in 44.935259ms
	I0127 03:23:31.907179  234652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 03:23:31.907221  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:31.950986  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:31.951131  234652 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:32.182524  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:32.230557  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:32.230686  234652 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:32.676314  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:32.743180  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:32.743297  234652 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:33.061731  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:33.094585  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	I0127 03:23:33.094682  234652 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:33.648999  234652 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703
	W0127 03:23:33.693640  234652 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220127031714-6703 returned with exit code 1
	W0127 03:23:33.693766  234652 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0127 03:23:33.693788  234652 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:33.693799  234652 fix.go:57] fixHost completed within 3.390200325s
	I0127 03:23:33.693815  234652 start.go:80] releasing machines lock for "old-k8s-version-20220127031714-6703", held for 3.39023466s
	W0127 03:23:33.694010  234652 out.go:241] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20220127031714-6703" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20220127031714-6703" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0127 03:23:33.695490  234652 out.go:176] 
	W0127 03:23:33.695620  234652 out.go:241] X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	I0127 03:23:33.697105  234652 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-20220127031714-6703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
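The stderr log above shows why the start fails: minikube polls docker container inspect for the published 22/tcp host port and retries after short delays, but the container has already exited, so every attempt returns exit code 1 and provisioning gives up. Below is a minimal, self-contained sketch of that inspect-and-retry pattern; it is illustrative only, not minikube's cli_runner/retry code, and the retry delays are invented — only the profile name and the inspect template are taken from the log.

    // portprobe.go - illustrative sketch: poll `docker container inspect` for the
    // published 22/tcp host port, retrying while the container is not running.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func sshHostPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	const profile = "old-k8s-version-20220127031714-6703" // profile name from the log above
    	for attempt, delay := 1, 250*time.Millisecond; attempt <= 5; attempt++ {
    		port, err := sshHostPort(profile)
    		if err == nil && port != "" {
    			fmt.Println("ssh host port:", port)
    			return
    		}
    		fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, delay)
    		time.Sleep(delay)
    		delay += delay / 2 // crude backoff; minikube's actual retry intervals differ
    	}
    	fmt.Println("container never exposed 22/tcp; it is probably not running")
    }

Against an exited container this loop exhausts its retries exactly as the log above does, which is what ultimately surfaces as the GUEST_PROVISION_CONTAINER_EXITED exit.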
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220127031714-6703
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220127031714-6703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73",
	        "Created": "2022-01-27T03:20:46.660393198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "network 75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87 not found",
	            "StartedAt": "2022-01-27T03:20:47.166723014Z",
	            "FinishedAt": "2022-01-27T03:23:21.951827406Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hostname",
	        "HostsPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hosts",
	        "LogPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73-json.log",
	        "Name": "/old-k8s-version-20220127031714-6703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220127031714-6703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220127031714-6703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b-init/diff:/var/lib/docker/overlay2/16963715367ada0e39ac0ff75961ba8f4191279cc4ff8b9fd9fd28a2afd586a9/diff:/var/lib/docker/overlay2/cf3bc92115c85438ae77ffc60f5cb639c3968a2c050d4a65b31cd6fccfa41416/diff:/var/lib/docker/overlay2/218e751ff7f51ec2e7fadb35a20c5a8ea67c3b1fd9c070ad8816e2ab30fdd9e4/diff:/var/lib/docker/overlay2/bf9ba576be8949fd04e2902858d2da23ce5cd4ba75e8e532dcf79ee7b42b1908/diff:/var/lib/docker/overlay2/8a96f75d3963c09f4f38ba2dc19523e12f658d2ed81533b22cf9cd9057083153/diff:/var/lib/docker/overlay2/5ee8bc1f5021db1aa75bbf3fb3060fab633d7e9f5a618431b3dc5d8a40fbdd2f/diff:/var/lib/docker/overlay2/c2d261af5836940de0b3aff73ea5e1328576bb1a856847be0e7afced9977e7d6/diff:/var/lib/docker/overlay2/14d653eab7bf5140d2e59be7764608906b78b45fc00c1618ff27987917119e60/diff:/var/lib/docker/overlay2/98223f44c4eceaa2e2d3f5d58e0fedda224657dcca26fb238aa0adec9753395d/diff:/var/lib/docker/overlay2/281afd
7c879ee9ec2ed091125655b6d2471ebfba9d70f828f16d582ccd389d51/diff:/var/lib/docker/overlay2/051e1076d1ab90fd612cf6a1357f63367f2661aaf90bc242f4e0bc93ce78a37b/diff:/var/lib/docker/overlay2/f6a47c5c1b4fde4d27c0d8f0a804be3d88995b705c5683976ac96ab692e5d79c/diff:/var/lib/docker/overlay2/ea4bae1904f2e590771d839dff6c6213846ccc1cc1fbaadf7444396e4dcc2a17/diff:/var/lib/docker/overlay2/789935353e9a43b8382bc8319c2f7f598462bd83a26a3c822d3481f62ff3afe3/diff:/var/lib/docker/overlay2/8ba6338f7d22201db727697f81bb55a260d179c2853681b855cfe641eaaf5f44/diff:/var/lib/docker/overlay2/3afbc0354287f271a5fa34e1ce8c58bac282ecb529fae2e3fc083a2e11b9a108/diff:/var/lib/docker/overlay2/6d064d2ddd73c69a0ca655bd5c632e1fddeddb1ad03016ca650f2b2e18a7f5dd/diff:/var/lib/docker/overlay2/8caadb592a9d04201e0a2fbb7a839bee7e5fea75f250ae661ce62dccc9a6439a/diff:/var/lib/docker/overlay2/7f97926f14f19d98861be9f0b5ea4321e0073fda2b4246addff2babb6782e2cd/diff:/var/lib/docker/overlay2/f97aaf1f58df74aae41d4989c1c2bcbe345d3d495d617cf92c16b7b082951549/diff:/var/lib/d
ocker/overlay2/c94c523d52a95a2bb2b25c99d07f159fd33fc79a55a85dc98e3fac825ce6ef30/diff:/var/lib/docker/overlay2/ac1d2ace06b5b555e3c895bd7786df4f8cee55531f1ea226fbf3c3d5582d3f29/diff:/var/lib/docker/overlay2/309519db7300c42aa8c54419ffa576f100db793a69a4e5fe3109ca02e14b8e50/diff:/var/lib/docker/overlay2/6f64c8673ef7362fcd7d55d0c19c0830a5c31c3841ae4ee69e75cd25a74a5048/diff:/var/lib/docker/overlay2/c08216237e476a12f55f3c1bafafd366c6a53eaf799965fd0ad88d097eedbcbf/diff:/var/lib/docker/overlay2/a870da1a92b7f0cb45baa7d95ac8cbd143a39082f8958524958f0d802a216e62/diff:/var/lib/docker/overlay2/88e7ec21e8af05b1015ba035f3860188f9bffa0e509a67ffa4267d7c7a76e972/diff:/var/lib/docker/overlay2/531e06b694b717bab79b354999b4f4912499b6421997a49bce26d9e82f9a3754/diff:/var/lib/docker/overlay2/3edc6927179b5caaefdff0badec36d6cbe41a8420a77adf24018241e26e6b603/diff:/var/lib/docker/overlay2/4b8a0abd7fe49430e3a1a29c98a3b3e3323c0fadc5b1311e26b612a06f8f76ae/diff:/var/lib/docker/overlay2/d3f5099daf876cc2cdbd27f60cd147c3c916036d04752ada6a17bd13210
f19e2/diff:/var/lib/docker/overlay2/2679ad4fb25b15275ad72b787f695d7e12948cab8b6f4ec2d6a61df2e0fcff7f/diff:/var/lib/docker/overlay2/e628f10038c6dee7c1f2a72d6abe7d1e8af2d38114365290918485e6ac95b313/diff:/var/lib/docker/overlay2/b07fb4ed2e44e92c06fd6500a3a666d60418960b7a1bcb8ebc7a6bb8d06dee11/diff:/var/lib/docker/overlay2/cfa3d2dac7804a585841fcf779597c2da336e152637f929ce76bed23499c23cc/diff:/var/lib/docker/overlay2/dced6c5820d9485d9b1f29a8f74920f7d509f513758a2816be4cb2c4f63bb242/diff:/var/lib/docker/overlay2/d8ecc913e96f9de3f718a3a063e0e5fa9e116c77aff94a32616bdb00bd7aac7f/diff:/var/lib/docker/overlay2/b6ac33633267400f1aa77b5b17c69fc7527f3d0ed4dfbd41fb003c94505ea310/diff:/var/lib/docker/overlay2/bbc9c4b2e5f00714c3142098bcdb97dedd2ca84f7a19455dfeeff55a95beffd4/diff:/var/lib/docker/overlay2/84944472d3651e44a20fd3fa72bed747da3567af8ebf5db89f6200326fa8ac7c/diff:/var/lib/docker/overlay2/a3fbfb702a1204c83d8fcd07aeba21a7139c0f98a642a9049754275c50b0d89b/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220127031714-6703",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220127031714-6703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220127031714-6703",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1f42216df58be0f0ab6a0ffcecf614684c3782f227f54c0afcbc1fb8c8902e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/a1f42216df58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220127031714-6703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91f0a9797496",
	                        "old-k8s-version-20220127031714-6703"
	                    ],
	                    "NetworkID": "75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
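The inspect dump above is long, but the fields that matter for triage are in the State block: Status "exited", ExitCode 137, and the "network ... not found" error. A small sketch that pulls out just those fields with a docker inspect format template, using the container name from the report (a hypothetical helper, not part of the test suite):

    // statecheck.go - print only the container state fields relevant to triage.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const name = "old-k8s-version-20220127031714-6703" // container from the report
    	// Same data as the JSON dump above, reduced to Status/ExitCode/Error.
    	format := `{{.State.Status}} exit={{.State.ExitCode}} err={{.State.Error}}`
    	out, err := exec.Command("docker", "inspect", "-f", format, name).CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "docker inspect failed: %v\n%s", err, out)
    		os.Exit(1)
    	}
    	fmt.Print(string(out)) // e.g. "exited exit=137 err=network ... not found"
    }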
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703: exit status 7 (148.369385ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220127031714-6703" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (11.09s)

                                                
                                    

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220127031714-6703" does not exist
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220127031714-6703
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220127031714-6703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73",
	        "Created": "2022-01-27T03:20:46.660393198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "network 75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87 not found",
	            "StartedAt": "2022-01-27T03:20:47.166723014Z",
	            "FinishedAt": "2022-01-27T03:23:21.951827406Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hostname",
	        "HostsPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hosts",
	        "LogPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73-json.log",
	        "Name": "/old-k8s-version-20220127031714-6703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220127031714-6703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220127031714-6703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b-init/diff:/var/lib/docker/overlay2/16963715367ada0e39ac0ff75961ba8f4191279cc4ff8b9fd9fd28a2afd586a9/diff:/var/lib/docker/overlay2/cf3bc92115c85438ae77ffc60f5cb639c3968a2c050d4a65b31cd6fccfa41416/diff:/var/lib/docker/overlay2/218e751ff7f51ec2e7fadb35a20c5a8ea67c3b1fd9c070ad8816e2ab30fdd9e4/diff:/var/lib/docker/overlay2/bf9ba576be8949fd04e2902858d2da23ce5cd4ba75e8e532dcf79ee7b42b1908/diff:/var/lib/docker/overlay2/8a96f75d3963c09f4f38ba2dc19523e12f658d2ed81533b22cf9cd9057083153/diff:/var/lib/docker/overlay2/5ee8bc1f5021db1aa75bbf3fb3060fab633d7e9f5a618431b3dc5d8a40fbdd2f/diff:/var/lib/docker/overlay2/c2d261af5836940de0b3aff73ea5e1328576bb1a856847be0e7afced9977e7d6/diff:/var/lib/docker/overlay2/14d653eab7bf5140d2e59be7764608906b78b45fc00c1618ff27987917119e60/diff:/var/lib/docker/overlay2/98223f44c4eceaa2e2d3f5d58e0fedda224657dcca26fb238aa0adec9753395d/diff:/var/lib/docker/overlay2/281afd
7c879ee9ec2ed091125655b6d2471ebfba9d70f828f16d582ccd389d51/diff:/var/lib/docker/overlay2/051e1076d1ab90fd612cf6a1357f63367f2661aaf90bc242f4e0bc93ce78a37b/diff:/var/lib/docker/overlay2/f6a47c5c1b4fde4d27c0d8f0a804be3d88995b705c5683976ac96ab692e5d79c/diff:/var/lib/docker/overlay2/ea4bae1904f2e590771d839dff6c6213846ccc1cc1fbaadf7444396e4dcc2a17/diff:/var/lib/docker/overlay2/789935353e9a43b8382bc8319c2f7f598462bd83a26a3c822d3481f62ff3afe3/diff:/var/lib/docker/overlay2/8ba6338f7d22201db727697f81bb55a260d179c2853681b855cfe641eaaf5f44/diff:/var/lib/docker/overlay2/3afbc0354287f271a5fa34e1ce8c58bac282ecb529fae2e3fc083a2e11b9a108/diff:/var/lib/docker/overlay2/6d064d2ddd73c69a0ca655bd5c632e1fddeddb1ad03016ca650f2b2e18a7f5dd/diff:/var/lib/docker/overlay2/8caadb592a9d04201e0a2fbb7a839bee7e5fea75f250ae661ce62dccc9a6439a/diff:/var/lib/docker/overlay2/7f97926f14f19d98861be9f0b5ea4321e0073fda2b4246addff2babb6782e2cd/diff:/var/lib/docker/overlay2/f97aaf1f58df74aae41d4989c1c2bcbe345d3d495d617cf92c16b7b082951549/diff:/var/lib/d
ocker/overlay2/c94c523d52a95a2bb2b25c99d07f159fd33fc79a55a85dc98e3fac825ce6ef30/diff:/var/lib/docker/overlay2/ac1d2ace06b5b555e3c895bd7786df4f8cee55531f1ea226fbf3c3d5582d3f29/diff:/var/lib/docker/overlay2/309519db7300c42aa8c54419ffa576f100db793a69a4e5fe3109ca02e14b8e50/diff:/var/lib/docker/overlay2/6f64c8673ef7362fcd7d55d0c19c0830a5c31c3841ae4ee69e75cd25a74a5048/diff:/var/lib/docker/overlay2/c08216237e476a12f55f3c1bafafd366c6a53eaf799965fd0ad88d097eedbcbf/diff:/var/lib/docker/overlay2/a870da1a92b7f0cb45baa7d95ac8cbd143a39082f8958524958f0d802a216e62/diff:/var/lib/docker/overlay2/88e7ec21e8af05b1015ba035f3860188f9bffa0e509a67ffa4267d7c7a76e972/diff:/var/lib/docker/overlay2/531e06b694b717bab79b354999b4f4912499b6421997a49bce26d9e82f9a3754/diff:/var/lib/docker/overlay2/3edc6927179b5caaefdff0badec36d6cbe41a8420a77adf24018241e26e6b603/diff:/var/lib/docker/overlay2/4b8a0abd7fe49430e3a1a29c98a3b3e3323c0fadc5b1311e26b612a06f8f76ae/diff:/var/lib/docker/overlay2/d3f5099daf876cc2cdbd27f60cd147c3c916036d04752ada6a17bd13210
f19e2/diff:/var/lib/docker/overlay2/2679ad4fb25b15275ad72b787f695d7e12948cab8b6f4ec2d6a61df2e0fcff7f/diff:/var/lib/docker/overlay2/e628f10038c6dee7c1f2a72d6abe7d1e8af2d38114365290918485e6ac95b313/diff:/var/lib/docker/overlay2/b07fb4ed2e44e92c06fd6500a3a666d60418960b7a1bcb8ebc7a6bb8d06dee11/diff:/var/lib/docker/overlay2/cfa3d2dac7804a585841fcf779597c2da336e152637f929ce76bed23499c23cc/diff:/var/lib/docker/overlay2/dced6c5820d9485d9b1f29a8f74920f7d509f513758a2816be4cb2c4f63bb242/diff:/var/lib/docker/overlay2/d8ecc913e96f9de3f718a3a063e0e5fa9e116c77aff94a32616bdb00bd7aac7f/diff:/var/lib/docker/overlay2/b6ac33633267400f1aa77b5b17c69fc7527f3d0ed4dfbd41fb003c94505ea310/diff:/var/lib/docker/overlay2/bbc9c4b2e5f00714c3142098bcdb97dedd2ca84f7a19455dfeeff55a95beffd4/diff:/var/lib/docker/overlay2/84944472d3651e44a20fd3fa72bed747da3567af8ebf5db89f6200326fa8ac7c/diff:/var/lib/docker/overlay2/a3fbfb702a1204c83d8fcd07aeba21a7139c0f98a642a9049754275c50b0d89b/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220127031714-6703",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220127031714-6703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220127031714-6703",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1f42216df58be0f0ab6a0ffcecf614684c3782f227f54c0afcbc1fb8c8902e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/a1f42216df58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220127031714-6703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91f0a9797496",
	                        "old-k8s-version-20220127031714-6703"
	                    ],
	                    "NetworkID": "75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703: exit status 7 (159.692398ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220127031714-6703" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.21s)
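This subtest (and AddonExistsAfterStop below) fails immediately with context "old-k8s-version-20220127031714-6703" does not exist, because the earlier SecondStart failure means the cluster's kubeconfig entry was never recreated. A quick standalone check for that condition, using the context name from the report (an illustrative sketch, not test-suite code):

    // contextcheck.go - verify a kubeconfig context exists before using it.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const want = "old-k8s-version-20220127031714-6703" // context name from the report
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		fmt.Println("could not list contexts:", err)
    		return
    	}
    	for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if name == want {
    			fmt.Println("context exists:", want)
    			return
    		}
    	}
    	fmt.Printf("context %q does not exist (matches the failure above)\n", want)
    }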

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20220127031714-6703" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220127031714-6703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220127031714-6703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (38.137285ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220127031714-6703" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220127031714-6703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220127031714-6703
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220127031714-6703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73",
	        "Created": "2022-01-27T03:20:46.660393198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "network 75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87 not found",
	            "StartedAt": "2022-01-27T03:20:47.166723014Z",
	            "FinishedAt": "2022-01-27T03:23:21.951827406Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hostname",
	        "HostsPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hosts",
	        "LogPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73-json.log",
	        "Name": "/old-k8s-version-20220127031714-6703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220127031714-6703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220127031714-6703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b-init/diff:/var/lib/docker/overlay2/16963715367ada0e39ac0ff75961ba8f4191279cc4ff8b9fd9fd28a2afd586a9/diff:/var/lib/docker/overlay2/cf3bc92115c85438ae77ffc60f5cb639c3968a2c050d4a65b31cd6fccfa41416/diff:/var/lib/docker/overlay2/218e751ff7f51ec2e7fadb35a20c5a8ea67c3b1fd9c070ad8816e2ab30fdd9e4/diff:/var/lib/docker/overlay2/bf9ba576be8949fd04e2902858d2da23ce5cd4ba75e8e532dcf79ee7b42b1908/diff:/var/lib/docker/overlay2/8a96f75d3963c09f4f38ba2dc19523e12f658d2ed81533b22cf9cd9057083153/diff:/var/lib/docker/overlay2/5ee8bc1f5021db1aa75bbf3fb3060fab633d7e9f5a618431b3dc5d8a40fbdd2f/diff:/var/lib/docker/overlay2/c2d261af5836940de0b3aff73ea5e1328576bb1a856847be0e7afced9977e7d6/diff:/var/lib/docker/overlay2/14d653eab7bf5140d2e59be7764608906b78b45fc00c1618ff27987917119e60/diff:/var/lib/docker/overlay2/98223f44c4eceaa2e2d3f5d58e0fedda224657dcca26fb238aa0adec9753395d/diff:/var/lib/docker/overlay2/281afd
7c879ee9ec2ed091125655b6d2471ebfba9d70f828f16d582ccd389d51/diff:/var/lib/docker/overlay2/051e1076d1ab90fd612cf6a1357f63367f2661aaf90bc242f4e0bc93ce78a37b/diff:/var/lib/docker/overlay2/f6a47c5c1b4fde4d27c0d8f0a804be3d88995b705c5683976ac96ab692e5d79c/diff:/var/lib/docker/overlay2/ea4bae1904f2e590771d839dff6c6213846ccc1cc1fbaadf7444396e4dcc2a17/diff:/var/lib/docker/overlay2/789935353e9a43b8382bc8319c2f7f598462bd83a26a3c822d3481f62ff3afe3/diff:/var/lib/docker/overlay2/8ba6338f7d22201db727697f81bb55a260d179c2853681b855cfe641eaaf5f44/diff:/var/lib/docker/overlay2/3afbc0354287f271a5fa34e1ce8c58bac282ecb529fae2e3fc083a2e11b9a108/diff:/var/lib/docker/overlay2/6d064d2ddd73c69a0ca655bd5c632e1fddeddb1ad03016ca650f2b2e18a7f5dd/diff:/var/lib/docker/overlay2/8caadb592a9d04201e0a2fbb7a839bee7e5fea75f250ae661ce62dccc9a6439a/diff:/var/lib/docker/overlay2/7f97926f14f19d98861be9f0b5ea4321e0073fda2b4246addff2babb6782e2cd/diff:/var/lib/docker/overlay2/f97aaf1f58df74aae41d4989c1c2bcbe345d3d495d617cf92c16b7b082951549/diff:/var/lib/d
ocker/overlay2/c94c523d52a95a2bb2b25c99d07f159fd33fc79a55a85dc98e3fac825ce6ef30/diff:/var/lib/docker/overlay2/ac1d2ace06b5b555e3c895bd7786df4f8cee55531f1ea226fbf3c3d5582d3f29/diff:/var/lib/docker/overlay2/309519db7300c42aa8c54419ffa576f100db793a69a4e5fe3109ca02e14b8e50/diff:/var/lib/docker/overlay2/6f64c8673ef7362fcd7d55d0c19c0830a5c31c3841ae4ee69e75cd25a74a5048/diff:/var/lib/docker/overlay2/c08216237e476a12f55f3c1bafafd366c6a53eaf799965fd0ad88d097eedbcbf/diff:/var/lib/docker/overlay2/a870da1a92b7f0cb45baa7d95ac8cbd143a39082f8958524958f0d802a216e62/diff:/var/lib/docker/overlay2/88e7ec21e8af05b1015ba035f3860188f9bffa0e509a67ffa4267d7c7a76e972/diff:/var/lib/docker/overlay2/531e06b694b717bab79b354999b4f4912499b6421997a49bce26d9e82f9a3754/diff:/var/lib/docker/overlay2/3edc6927179b5caaefdff0badec36d6cbe41a8420a77adf24018241e26e6b603/diff:/var/lib/docker/overlay2/4b8a0abd7fe49430e3a1a29c98a3b3e3323c0fadc5b1311e26b612a06f8f76ae/diff:/var/lib/docker/overlay2/d3f5099daf876cc2cdbd27f60cd147c3c916036d04752ada6a17bd13210
f19e2/diff:/var/lib/docker/overlay2/2679ad4fb25b15275ad72b787f695d7e12948cab8b6f4ec2d6a61df2e0fcff7f/diff:/var/lib/docker/overlay2/e628f10038c6dee7c1f2a72d6abe7d1e8af2d38114365290918485e6ac95b313/diff:/var/lib/docker/overlay2/b07fb4ed2e44e92c06fd6500a3a666d60418960b7a1bcb8ebc7a6bb8d06dee11/diff:/var/lib/docker/overlay2/cfa3d2dac7804a585841fcf779597c2da336e152637f929ce76bed23499c23cc/diff:/var/lib/docker/overlay2/dced6c5820d9485d9b1f29a8f74920f7d509f513758a2816be4cb2c4f63bb242/diff:/var/lib/docker/overlay2/d8ecc913e96f9de3f718a3a063e0e5fa9e116c77aff94a32616bdb00bd7aac7f/diff:/var/lib/docker/overlay2/b6ac33633267400f1aa77b5b17c69fc7527f3d0ed4dfbd41fb003c94505ea310/diff:/var/lib/docker/overlay2/bbc9c4b2e5f00714c3142098bcdb97dedd2ca84f7a19455dfeeff55a95beffd4/diff:/var/lib/docker/overlay2/84944472d3651e44a20fd3fa72bed747da3567af8ebf5db89f6200326fa8ac7c/diff:/var/lib/docker/overlay2/a3fbfb702a1204c83d8fcd07aeba21a7139c0f98a642a9049754275c50b0d89b/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220127031714-6703",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220127031714-6703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220127031714-6703",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1f42216df58be0f0ab6a0ffcecf614684c3782f227f54c0afcbc1fb8c8902e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/a1f42216df58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220127031714-6703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91f0a9797496",
	                        "old-k8s-version-20220127031714-6703"
	                    ],
	                    "NetworkID": "75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703: exit status 7 (123.363995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220127031714-6703" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220127031714-6703 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-20220127031714-6703 "sudo crictl images -o json": exit status 89 (120.895992ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-20220127031714-6703"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-20220127031714-6703 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:289: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-20220127031714-6703"
start_stop_delete_test.go:289: v1.16.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
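The image check runs crictl images -o json over SSH and diffs the repo tags against the expected v1.16.0 list; here the command returns the "control plane node must be running" banner instead of JSON, so decoding fails on the leading '*'. A rough sketch of the decode-and-collect step follows, assuming the usual crictl output shape of {"images": [{"repoTags": [...]}]} — the struct fields and the crictl-images.json filename are my assumptions, not taken from the test:

    // imagelist.go - decode `crictl images -o json` output and collect repo tags.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Assumed shape of the crictl JSON output; only the fields used here.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	// In the failing test this would be the stdout of
    	// `minikube ssh -p <profile> "sudo crictl images -o json"`.
    	raw, err := os.ReadFile("crictl-images.json") // hypothetical captured output
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	var list imageList
    	if err := json.Unmarshal(raw, &list); err != nil {
    		// This is the branch the test hit: the "control plane node must be
    		// running" banner starts with '*', which is not valid JSON.
    		fmt.Println("failed to decode images json:", err)
    		return
    	}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			fmt.Println(tag)
    		}
    	}
    }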
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220127031714-6703
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220127031714-6703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73",
	        "Created": "2022-01-27T03:20:46.660393198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "network 75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87 not found",
	            "StartedAt": "2022-01-27T03:20:47.166723014Z",
	            "FinishedAt": "2022-01-27T03:23:21.951827406Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hostname",
	        "HostsPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hosts",
	        "LogPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73-json.log",
	        "Name": "/old-k8s-version-20220127031714-6703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220127031714-6703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220127031714-6703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b-init/diff:/var/lib/docker/overlay2/16963715367ada0e39ac0ff75961ba8f4191279cc4ff8b9fd9fd28a2afd586a9/diff:/var/lib/docker/overlay2/cf3bc92115c85438ae77ffc60f5cb639c3968a2c050d4a65b31cd6fccfa41416/diff:/var/lib/docker/overlay2/218e751ff7f51ec2e7fadb35a20c5a8ea67c3b1fd9c070ad8816e2ab30fdd9e4/diff:/var/lib/docker/overlay2/bf9ba576be8949fd04e2902858d2da23ce5cd4ba75e8e532dcf79ee7b42b1908/diff:/var/lib/docker/overlay2/8a96f75d3963c09f4f38ba2dc19523e12f658d2ed81533b22cf9cd9057083153/diff:/var/lib/docker/overlay2/5ee8bc1f5021db1aa75bbf3fb3060fab633d7e9f5a618431b3dc5d8a40fbdd2f/diff:/var/lib/docker/overlay2/c2d261af5836940de0b3aff73ea5e1328576bb1a856847be0e7afced9977e7d6/diff:/var/lib/docker/overlay2/14d653eab7bf5140d2e59be7764608906b78b45fc00c1618ff27987917119e60/diff:/var/lib/docker/overlay2/98223f44c4eceaa2e2d3f5d58e0fedda224657dcca26fb238aa0adec9753395d/diff:/var/lib/docker/overlay2/281afd
7c879ee9ec2ed091125655b6d2471ebfba9d70f828f16d582ccd389d51/diff:/var/lib/docker/overlay2/051e1076d1ab90fd612cf6a1357f63367f2661aaf90bc242f4e0bc93ce78a37b/diff:/var/lib/docker/overlay2/f6a47c5c1b4fde4d27c0d8f0a804be3d88995b705c5683976ac96ab692e5d79c/diff:/var/lib/docker/overlay2/ea4bae1904f2e590771d839dff6c6213846ccc1cc1fbaadf7444396e4dcc2a17/diff:/var/lib/docker/overlay2/789935353e9a43b8382bc8319c2f7f598462bd83a26a3c822d3481f62ff3afe3/diff:/var/lib/docker/overlay2/8ba6338f7d22201db727697f81bb55a260d179c2853681b855cfe641eaaf5f44/diff:/var/lib/docker/overlay2/3afbc0354287f271a5fa34e1ce8c58bac282ecb529fae2e3fc083a2e11b9a108/diff:/var/lib/docker/overlay2/6d064d2ddd73c69a0ca655bd5c632e1fddeddb1ad03016ca650f2b2e18a7f5dd/diff:/var/lib/docker/overlay2/8caadb592a9d04201e0a2fbb7a839bee7e5fea75f250ae661ce62dccc9a6439a/diff:/var/lib/docker/overlay2/7f97926f14f19d98861be9f0b5ea4321e0073fda2b4246addff2babb6782e2cd/diff:/var/lib/docker/overlay2/f97aaf1f58df74aae41d4989c1c2bcbe345d3d495d617cf92c16b7b082951549/diff:/var/lib/d
ocker/overlay2/c94c523d52a95a2bb2b25c99d07f159fd33fc79a55a85dc98e3fac825ce6ef30/diff:/var/lib/docker/overlay2/ac1d2ace06b5b555e3c895bd7786df4f8cee55531f1ea226fbf3c3d5582d3f29/diff:/var/lib/docker/overlay2/309519db7300c42aa8c54419ffa576f100db793a69a4e5fe3109ca02e14b8e50/diff:/var/lib/docker/overlay2/6f64c8673ef7362fcd7d55d0c19c0830a5c31c3841ae4ee69e75cd25a74a5048/diff:/var/lib/docker/overlay2/c08216237e476a12f55f3c1bafafd366c6a53eaf799965fd0ad88d097eedbcbf/diff:/var/lib/docker/overlay2/a870da1a92b7f0cb45baa7d95ac8cbd143a39082f8958524958f0d802a216e62/diff:/var/lib/docker/overlay2/88e7ec21e8af05b1015ba035f3860188f9bffa0e509a67ffa4267d7c7a76e972/diff:/var/lib/docker/overlay2/531e06b694b717bab79b354999b4f4912499b6421997a49bce26d9e82f9a3754/diff:/var/lib/docker/overlay2/3edc6927179b5caaefdff0badec36d6cbe41a8420a77adf24018241e26e6b603/diff:/var/lib/docker/overlay2/4b8a0abd7fe49430e3a1a29c98a3b3e3323c0fadc5b1311e26b612a06f8f76ae/diff:/var/lib/docker/overlay2/d3f5099daf876cc2cdbd27f60cd147c3c916036d04752ada6a17bd13210
f19e2/diff:/var/lib/docker/overlay2/2679ad4fb25b15275ad72b787f695d7e12948cab8b6f4ec2d6a61df2e0fcff7f/diff:/var/lib/docker/overlay2/e628f10038c6dee7c1f2a72d6abe7d1e8af2d38114365290918485e6ac95b313/diff:/var/lib/docker/overlay2/b07fb4ed2e44e92c06fd6500a3a666d60418960b7a1bcb8ebc7a6bb8d06dee11/diff:/var/lib/docker/overlay2/cfa3d2dac7804a585841fcf779597c2da336e152637f929ce76bed23499c23cc/diff:/var/lib/docker/overlay2/dced6c5820d9485d9b1f29a8f74920f7d509f513758a2816be4cb2c4f63bb242/diff:/var/lib/docker/overlay2/d8ecc913e96f9de3f718a3a063e0e5fa9e116c77aff94a32616bdb00bd7aac7f/diff:/var/lib/docker/overlay2/b6ac33633267400f1aa77b5b17c69fc7527f3d0ed4dfbd41fb003c94505ea310/diff:/var/lib/docker/overlay2/bbc9c4b2e5f00714c3142098bcdb97dedd2ca84f7a19455dfeeff55a95beffd4/diff:/var/lib/docker/overlay2/84944472d3651e44a20fd3fa72bed747da3567af8ebf5db89f6200326fa8ac7c/diff:/var/lib/docker/overlay2/a3fbfb702a1204c83d8fcd07aeba21a7139c0f98a642a9049754275c50b0d89b/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220127031714-6703",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220127031714-6703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220127031714-6703",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1f42216df58be0f0ab6a0ffcecf614684c3782f227f54c0afcbc1fb8c8902e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/a1f42216df58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220127031714-6703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91f0a9797496",
	                        "old-k8s-version-20220127031714-6703"
	                    ],
	                    "NetworkID": "75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703: exit status 7 (136.906893ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220127031714-6703" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
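The docker inspect dump above already contains the root cause: the container is exited with ExitCode 137 and State.Error reporting that its named network is gone. Rather than reading the full JSON, the same State fields can be pulled out with docker's --format Go-template support; the sketch below does only that, reusing the profile name from the log (a diagnostic sketch, not part of the test suite).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the status, exit code and error string for a
// container, using docker's --format Go-template support. It reads the same
// State block shown in the post-mortem above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}} {{.State.ExitCode}} {{.State.Error}}").Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Profile/container name taken from the failing test above.
	state, err := containerState("old-k8s-version-20220127031714-6703")
	if err != nil {
		fmt.Println(err)
		return
	}
	// For the run above this prints something like:
	//   exited 137 network 75758b286fdf... not found
	fmt.Println(state)
}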

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220127031714-6703 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20220127031714-6703 --alsologtostderr -v=1: exit status 89 (134.984579ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-20220127031714-6703"

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 03:23:34.775149  237789 out.go:297] Setting OutFile to fd 1 ...
	I0127 03:23:34.775249  237789 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:23:34.775258  237789 out.go:310] Setting ErrFile to fd 2...
	I0127 03:23:34.775262  237789 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:23:34.775387  237789 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0127 03:23:34.775589  237789 out.go:304] Setting JSON to false
	I0127 03:23:34.775609  237789 mustload.go:65] Loading cluster: old-k8s-version-20220127031714-6703
	I0127 03:23:34.775944  237789 config.go:176] Loaded profile config "old-k8s-version-20220127031714-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0127 03:23:34.776305  237789 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220127031714-6703 --format={{.State.Status}}
	I0127 03:23:34.819416  237789 out.go:176] * The control plane node must be running for this command
	I0127 03:23:34.822287  237789 out.go:176]   To start a cluster, run: "minikube start -p old-k8s-version-20220127031714-6703"

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-linux-amd64 pause -p old-k8s-version-20220127031714-6703 --alsologtostderr -v=1 failed: exit status 89
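The stderr trace above shows the pre-flight that makes pause exit early: the profile config is loaded, the container's {{.State.Status}} is inspected, and the command prints the "control plane node must be running" hint when the container is not running (exit status 89 in this run). A rough sketch of that check, with an illustrative helper name and the exit code taken from this log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// ensureRunning mirrors the pre-flight visible in the stderr trace above:
// inspect the profile's container status and bail out with the same hint
// when it is not "running". The exit code 89 matches this run; the helper
// itself is a sketch, not minikube's own implementation.
func ensureRunning(profile string) {
	out, err := exec.Command("docker", "container", "inspect", profile,
		"--format", "{{.State.Status}}").Output()
	status := strings.TrimSpace(string(out))
	if err != nil || status != "running" {
		fmt.Println("* The control plane node must be running for this command")
		fmt.Printf("  To start a cluster, run: \"minikube start -p %s\"\n", profile)
		os.Exit(89)
	}
}

func main() {
	ensureRunning("old-k8s-version-20220127031714-6703")
	fmt.Println("container is running; safe to pause")
}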
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220127031714-6703
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220127031714-6703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73",
	        "Created": "2022-01-27T03:20:46.660393198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "network 75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87 not found",
	            "StartedAt": "2022-01-27T03:20:47.166723014Z",
	            "FinishedAt": "2022-01-27T03:23:21.951827406Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hostname",
	        "HostsPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hosts",
	        "LogPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73-json.log",
	        "Name": "/old-k8s-version-20220127031714-6703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220127031714-6703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220127031714-6703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b-init/diff:/var/lib/docker/overlay2/16963715367ada0e39ac0ff75961ba8f4191279cc4ff8b9fd9fd28a2afd586a9/diff:/var/lib/docker/overlay2/cf3bc92115c85438ae77ffc60f5cb639c3968a2c050d4a65b31cd6fccfa41416/diff:/var/lib/docker/overlay2/218e751ff7f51ec2e7fadb35a20c5a8ea67c3b1fd9c070ad8816e2ab30fdd9e4/diff:/var/lib/docker/overlay2/bf9ba576be8949fd04e2902858d2da23ce5cd4ba75e8e532dcf79ee7b42b1908/diff:/var/lib/docker/overlay2/8a96f75d3963c09f4f38ba2dc19523e12f658d2ed81533b22cf9cd9057083153/diff:/var/lib/docker/overlay2/5ee8bc1f5021db1aa75bbf3fb3060fab633d7e9f5a618431b3dc5d8a40fbdd2f/diff:/var/lib/docker/overlay2/c2d261af5836940de0b3aff73ea5e1328576bb1a856847be0e7afced9977e7d6/diff:/var/lib/docker/overlay2/14d653eab7bf5140d2e59be7764608906b78b45fc00c1618ff27987917119e60/diff:/var/lib/docker/overlay2/98223f44c4eceaa2e2d3f5d58e0fedda224657dcca26fb238aa0adec9753395d/diff:/var/lib/docker/overlay2/281afd
7c879ee9ec2ed091125655b6d2471ebfba9d70f828f16d582ccd389d51/diff:/var/lib/docker/overlay2/051e1076d1ab90fd612cf6a1357f63367f2661aaf90bc242f4e0bc93ce78a37b/diff:/var/lib/docker/overlay2/f6a47c5c1b4fde4d27c0d8f0a804be3d88995b705c5683976ac96ab692e5d79c/diff:/var/lib/docker/overlay2/ea4bae1904f2e590771d839dff6c6213846ccc1cc1fbaadf7444396e4dcc2a17/diff:/var/lib/docker/overlay2/789935353e9a43b8382bc8319c2f7f598462bd83a26a3c822d3481f62ff3afe3/diff:/var/lib/docker/overlay2/8ba6338f7d22201db727697f81bb55a260d179c2853681b855cfe641eaaf5f44/diff:/var/lib/docker/overlay2/3afbc0354287f271a5fa34e1ce8c58bac282ecb529fae2e3fc083a2e11b9a108/diff:/var/lib/docker/overlay2/6d064d2ddd73c69a0ca655bd5c632e1fddeddb1ad03016ca650f2b2e18a7f5dd/diff:/var/lib/docker/overlay2/8caadb592a9d04201e0a2fbb7a839bee7e5fea75f250ae661ce62dccc9a6439a/diff:/var/lib/docker/overlay2/7f97926f14f19d98861be9f0b5ea4321e0073fda2b4246addff2babb6782e2cd/diff:/var/lib/docker/overlay2/f97aaf1f58df74aae41d4989c1c2bcbe345d3d495d617cf92c16b7b082951549/diff:/var/lib/d
ocker/overlay2/c94c523d52a95a2bb2b25c99d07f159fd33fc79a55a85dc98e3fac825ce6ef30/diff:/var/lib/docker/overlay2/ac1d2ace06b5b555e3c895bd7786df4f8cee55531f1ea226fbf3c3d5582d3f29/diff:/var/lib/docker/overlay2/309519db7300c42aa8c54419ffa576f100db793a69a4e5fe3109ca02e14b8e50/diff:/var/lib/docker/overlay2/6f64c8673ef7362fcd7d55d0c19c0830a5c31c3841ae4ee69e75cd25a74a5048/diff:/var/lib/docker/overlay2/c08216237e476a12f55f3c1bafafd366c6a53eaf799965fd0ad88d097eedbcbf/diff:/var/lib/docker/overlay2/a870da1a92b7f0cb45baa7d95ac8cbd143a39082f8958524958f0d802a216e62/diff:/var/lib/docker/overlay2/88e7ec21e8af05b1015ba035f3860188f9bffa0e509a67ffa4267d7c7a76e972/diff:/var/lib/docker/overlay2/531e06b694b717bab79b354999b4f4912499b6421997a49bce26d9e82f9a3754/diff:/var/lib/docker/overlay2/3edc6927179b5caaefdff0badec36d6cbe41a8420a77adf24018241e26e6b603/diff:/var/lib/docker/overlay2/4b8a0abd7fe49430e3a1a29c98a3b3e3323c0fadc5b1311e26b612a06f8f76ae/diff:/var/lib/docker/overlay2/d3f5099daf876cc2cdbd27f60cd147c3c916036d04752ada6a17bd13210
f19e2/diff:/var/lib/docker/overlay2/2679ad4fb25b15275ad72b787f695d7e12948cab8b6f4ec2d6a61df2e0fcff7f/diff:/var/lib/docker/overlay2/e628f10038c6dee7c1f2a72d6abe7d1e8af2d38114365290918485e6ac95b313/diff:/var/lib/docker/overlay2/b07fb4ed2e44e92c06fd6500a3a666d60418960b7a1bcb8ebc7a6bb8d06dee11/diff:/var/lib/docker/overlay2/cfa3d2dac7804a585841fcf779597c2da336e152637f929ce76bed23499c23cc/diff:/var/lib/docker/overlay2/dced6c5820d9485d9b1f29a8f74920f7d509f513758a2816be4cb2c4f63bb242/diff:/var/lib/docker/overlay2/d8ecc913e96f9de3f718a3a063e0e5fa9e116c77aff94a32616bdb00bd7aac7f/diff:/var/lib/docker/overlay2/b6ac33633267400f1aa77b5b17c69fc7527f3d0ed4dfbd41fb003c94505ea310/diff:/var/lib/docker/overlay2/bbc9c4b2e5f00714c3142098bcdb97dedd2ca84f7a19455dfeeff55a95beffd4/diff:/var/lib/docker/overlay2/84944472d3651e44a20fd3fa72bed747da3567af8ebf5db89f6200326fa8ac7c/diff:/var/lib/docker/overlay2/a3fbfb702a1204c83d8fcd07aeba21a7139c0f98a642a9049754275c50b0d89b/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220127031714-6703",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220127031714-6703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220127031714-6703",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1f42216df58be0f0ab6a0ffcecf614684c3782f227f54c0afcbc1fb8c8902e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/a1f42216df58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220127031714-6703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91f0a9797496",
	                        "old-k8s-version-20220127031714-6703"
	                    ],
	                    "NetworkID": "75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703: exit status 7 (123.147046ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220127031714-6703" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20220127031714-6703
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220127031714-6703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73",
	        "Created": "2022-01-27T03:20:46.660393198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "network 75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87 not found",
	            "StartedAt": "2022-01-27T03:20:47.166723014Z",
	            "FinishedAt": "2022-01-27T03:23:21.951827406Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hostname",
	        "HostsPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/hosts",
	        "LogPath": "/var/lib/docker/containers/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73/91f0a979749657164e7ab60351dec9fa593dad8756bf29d0d8012670478b7a73-json.log",
	        "Name": "/old-k8s-version-20220127031714-6703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220127031714-6703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220127031714-6703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b-init/diff:/var/lib/docker/overlay2/16963715367ada0e39ac0ff75961ba8f4191279cc4ff8b9fd9fd28a2afd586a9/diff:/var/lib/docker/overlay2/cf3bc92115c85438ae77ffc60f5cb639c3968a2c050d4a65b31cd6fccfa41416/diff:/var/lib/docker/overlay2/218e751ff7f51ec2e7fadb35a20c5a8ea67c3b1fd9c070ad8816e2ab30fdd9e4/diff:/var/lib/docker/overlay2/bf9ba576be8949fd04e2902858d2da23ce5cd4ba75e8e532dcf79ee7b42b1908/diff:/var/lib/docker/overlay2/8a96f75d3963c09f4f38ba2dc19523e12f658d2ed81533b22cf9cd9057083153/diff:/var/lib/docker/overlay2/5ee8bc1f5021db1aa75bbf3fb3060fab633d7e9f5a618431b3dc5d8a40fbdd2f/diff:/var/lib/docker/overlay2/c2d261af5836940de0b3aff73ea5e1328576bb1a856847be0e7afced9977e7d6/diff:/var/lib/docker/overlay2/14d653eab7bf5140d2e59be7764608906b78b45fc00c1618ff27987917119e60/diff:/var/lib/docker/overlay2/98223f44c4eceaa2e2d3f5d58e0fedda224657dcca26fb238aa0adec9753395d/diff:/var/lib/docker/overlay2/281afd
7c879ee9ec2ed091125655b6d2471ebfba9d70f828f16d582ccd389d51/diff:/var/lib/docker/overlay2/051e1076d1ab90fd612cf6a1357f63367f2661aaf90bc242f4e0bc93ce78a37b/diff:/var/lib/docker/overlay2/f6a47c5c1b4fde4d27c0d8f0a804be3d88995b705c5683976ac96ab692e5d79c/diff:/var/lib/docker/overlay2/ea4bae1904f2e590771d839dff6c6213846ccc1cc1fbaadf7444396e4dcc2a17/diff:/var/lib/docker/overlay2/789935353e9a43b8382bc8319c2f7f598462bd83a26a3c822d3481f62ff3afe3/diff:/var/lib/docker/overlay2/8ba6338f7d22201db727697f81bb55a260d179c2853681b855cfe641eaaf5f44/diff:/var/lib/docker/overlay2/3afbc0354287f271a5fa34e1ce8c58bac282ecb529fae2e3fc083a2e11b9a108/diff:/var/lib/docker/overlay2/6d064d2ddd73c69a0ca655bd5c632e1fddeddb1ad03016ca650f2b2e18a7f5dd/diff:/var/lib/docker/overlay2/8caadb592a9d04201e0a2fbb7a839bee7e5fea75f250ae661ce62dccc9a6439a/diff:/var/lib/docker/overlay2/7f97926f14f19d98861be9f0b5ea4321e0073fda2b4246addff2babb6782e2cd/diff:/var/lib/docker/overlay2/f97aaf1f58df74aae41d4989c1c2bcbe345d3d495d617cf92c16b7b082951549/diff:/var/lib/d
ocker/overlay2/c94c523d52a95a2bb2b25c99d07f159fd33fc79a55a85dc98e3fac825ce6ef30/diff:/var/lib/docker/overlay2/ac1d2ace06b5b555e3c895bd7786df4f8cee55531f1ea226fbf3c3d5582d3f29/diff:/var/lib/docker/overlay2/309519db7300c42aa8c54419ffa576f100db793a69a4e5fe3109ca02e14b8e50/diff:/var/lib/docker/overlay2/6f64c8673ef7362fcd7d55d0c19c0830a5c31c3841ae4ee69e75cd25a74a5048/diff:/var/lib/docker/overlay2/c08216237e476a12f55f3c1bafafd366c6a53eaf799965fd0ad88d097eedbcbf/diff:/var/lib/docker/overlay2/a870da1a92b7f0cb45baa7d95ac8cbd143a39082f8958524958f0d802a216e62/diff:/var/lib/docker/overlay2/88e7ec21e8af05b1015ba035f3860188f9bffa0e509a67ffa4267d7c7a76e972/diff:/var/lib/docker/overlay2/531e06b694b717bab79b354999b4f4912499b6421997a49bce26d9e82f9a3754/diff:/var/lib/docker/overlay2/3edc6927179b5caaefdff0badec36d6cbe41a8420a77adf24018241e26e6b603/diff:/var/lib/docker/overlay2/4b8a0abd7fe49430e3a1a29c98a3b3e3323c0fadc5b1311e26b612a06f8f76ae/diff:/var/lib/docker/overlay2/d3f5099daf876cc2cdbd27f60cd147c3c916036d04752ada6a17bd13210
f19e2/diff:/var/lib/docker/overlay2/2679ad4fb25b15275ad72b787f695d7e12948cab8b6f4ec2d6a61df2e0fcff7f/diff:/var/lib/docker/overlay2/e628f10038c6dee7c1f2a72d6abe7d1e8af2d38114365290918485e6ac95b313/diff:/var/lib/docker/overlay2/b07fb4ed2e44e92c06fd6500a3a666d60418960b7a1bcb8ebc7a6bb8d06dee11/diff:/var/lib/docker/overlay2/cfa3d2dac7804a585841fcf779597c2da336e152637f929ce76bed23499c23cc/diff:/var/lib/docker/overlay2/dced6c5820d9485d9b1f29a8f74920f7d509f513758a2816be4cb2c4f63bb242/diff:/var/lib/docker/overlay2/d8ecc913e96f9de3f718a3a063e0e5fa9e116c77aff94a32616bdb00bd7aac7f/diff:/var/lib/docker/overlay2/b6ac33633267400f1aa77b5b17c69fc7527f3d0ed4dfbd41fb003c94505ea310/diff:/var/lib/docker/overlay2/bbc9c4b2e5f00714c3142098bcdb97dedd2ca84f7a19455dfeeff55a95beffd4/diff:/var/lib/docker/overlay2/84944472d3651e44a20fd3fa72bed747da3567af8ebf5db89f6200326fa8ac7c/diff:/var/lib/docker/overlay2/a3fbfb702a1204c83d8fcd07aeba21a7139c0f98a642a9049754275c50b0d89b/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/970e93133d8fce1e1cc56e1ea44d6d0e67fcb8d32198cf7770f3898c71765d9b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220127031714-6703",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220127031714-6703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220127031714-6703",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220127031714-6703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1f42216df58be0f0ab6a0ffcecf614684c3782f227f54c0afcbc1fb8c8902e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/a1f42216df58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220127031714-6703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91f0a9797496",
	                        "old-k8s-version-20220127031714-6703"
	                    ],
	                    "NetworkID": "75758b286fdf1dad635bb8a0f3fdd8dd7a6e676c24baa1342fb095c73ec27f87",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703: exit status 7 (103.276994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "old-k8s-version-20220127031714-6703" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.45s)
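Each post-mortem also probes the host with "status --format={{.Host}}", where --format renders a Go template over the status output; in these runs it exits with status 7 and prints Stopped, which the helper notes "may be ok". A small sketch of the same probe, assuming only what the log shows (non-zero exit plus a Stopped host):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same status probe the post-mortems above use and
// returns the rendered {{.Host}} field ("Stopped" in these runs). The
// captured stdout is returned alongside any non-zero-exit error, since the
// helper in the log treats that error as "may be ok".
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := hostState("old-k8s-version-20220127031714-6703")
	if err != nil {
		fmt.Printf("status exited non-zero (%v); host state: %q\n", err, state)
		return
	}
	fmt.Printf("host state: %q\n", state)
}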

                                                
                                    

Test pass (258/289)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 5.08
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.23.2/json-events 5.01
11 TestDownloadOnly/v1.23.2/preload-exists 0
15 TestDownloadOnly/v1.23.2/LogsDuration 0.06
17 TestDownloadOnly/v1.23.3-rc.0/json-events 4.83
18 TestDownloadOnly/v1.23.3-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.3-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.31
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.19
25 TestDownloadOnlyKic 9.15
26 TestBinaryMirror 0.83
27 TestOffline 88.75
29 TestAddons/Setup 116.46
31 TestAddons/parallel/Registry 13.47
32 TestAddons/parallel/Ingress 47.82
33 TestAddons/parallel/MetricsServer 5.64
34 TestAddons/parallel/HelmTiller 13.17
36 TestAddons/parallel/CSI 44.65
38 TestAddons/serial/GCPAuth 40.24
39 TestAddons/StoppedEnableDisable 20.33
40 TestCertOptions 61.69
41 TestCertExpiration 271.44
43 TestForceSystemdFlag 51.99
44 TestForceSystemdEnv 76.19
45 TestKVMDriverInstallOrUpdate 4.03
49 TestErrorSpam/setup 41.26
50 TestErrorSpam/start 0.84
51 TestErrorSpam/status 1.1
52 TestErrorSpam/pause 2.34
53 TestErrorSpam/unpause 1.52
54 TestErrorSpam/stop 14.9
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 66.65
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 15.49
61 TestFunctional/serial/KubeContext 0.03
62 TestFunctional/serial/KubectlGetPods 0.16
65 TestFunctional/serial/CacheCmd/cache/add_remote 3.73
66 TestFunctional/serial/CacheCmd/cache/add_local 1.93
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
70 TestFunctional/serial/CacheCmd/cache/cache_reload 2.13
71 TestFunctional/serial/CacheCmd/cache/delete 0.11
72 TestFunctional/serial/MinikubeKubectlCmd 0.11
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
74 TestFunctional/serial/ExtraConfig 54.33
75 TestFunctional/serial/ComponentHealth 0.05
76 TestFunctional/serial/LogsCmd 0.87
77 TestFunctional/serial/LogsFileCmd 0.9
79 TestFunctional/parallel/ConfigCmd 0.43
80 TestFunctional/parallel/DashboardCmd 4.13
81 TestFunctional/parallel/DryRun 0.52
82 TestFunctional/parallel/InternationalLanguage 0.23
83 TestFunctional/parallel/StatusCmd 1.45
86 TestFunctional/parallel/ServiceCmd 14.14
87 TestFunctional/parallel/AddonsCmd 0.17
88 TestFunctional/parallel/PersistentVolumeClaim 34.42
90 TestFunctional/parallel/SSHCmd 0.84
91 TestFunctional/parallel/CpCmd 1.8
92 TestFunctional/parallel/MySQL 20.31
93 TestFunctional/parallel/FileSync 0.37
94 TestFunctional/parallel/CertSync 2.35
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.82
102 TestFunctional/parallel/Version/short 0.09
103 TestFunctional/parallel/Version/components 1.33
104 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
105 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
106 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
107 TestFunctional/parallel/ImageCommands/ImageListShort 0.55
108 TestFunctional/parallel/ImageCommands/ImageListTable 0.43
109 TestFunctional/parallel/ImageCommands/ImageListJson 0.37
110 TestFunctional/parallel/ImageCommands/ImageListYaml 0.43
111 TestFunctional/parallel/ImageCommands/ImageBuild 3.7
112 TestFunctional/parallel/ImageCommands/Setup 1.1
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.59
115 TestFunctional/parallel/ProfileCmd/profile_list 0.52
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.24
120 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.13
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.42
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/MountCmd/any-port 6.28
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.13
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.58
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.13
134 TestFunctional/parallel/MountCmd/specific-port 2.26
135 TestFunctional/delete_addon-resizer_images 0.09
136 TestFunctional/delete_my-image_image 0.03
137 TestFunctional/delete_minikube_cached_images 0.03
140 TestIngressAddonLegacy/StartLegacyK8sCluster 79.54
142 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.62
143 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
144 TestIngressAddonLegacy/serial/ValidateIngressAddons 65.53
147 TestJSONOutput/start/Command 66.88
148 TestJSONOutput/start/Audit 0
150 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/pause/Command 0.7
154 TestJSONOutput/pause/Audit 0
156 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/unpause/Command 0.62
160 TestJSONOutput/unpause/Audit 0
162 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/stop/Command 15.73
166 TestJSONOutput/stop/Audit 0
168 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
170 TestErrorJSONOutput 0.27
172 TestKicCustomNetwork/create_custom_network 35.06
173 TestKicCustomNetwork/use_default_bridge_network 28.31
174 TestKicExistingNetwork 28.82
175 TestMainNoArgs 0.06
178 TestMountStart/serial/StartWithMountFirst 5.01
179 TestMountStart/serial/VerifyMountFirst 0.32
180 TestMountStart/serial/StartWithMountSecond 4.63
181 TestMountStart/serial/VerifyMountSecond 0.32
182 TestMountStart/serial/DeleteFirst 1.85
183 TestMountStart/serial/VerifyMountPostDelete 0.33
184 TestMountStart/serial/Stop 1.26
185 TestMountStart/serial/RestartStopped 6.36
186 TestMountStart/serial/VerifyMountPostStop 0.32
189 TestMultiNode/serial/FreshStart2Nodes 104.89
190 TestMultiNode/serial/DeployApp2Nodes 3.4
191 TestMultiNode/serial/PingHostFrom2Pods 0.78
192 TestMultiNode/serial/AddNode 43.14
193 TestMultiNode/serial/ProfileList 0.36
194 TestMultiNode/serial/CopyFile 11.77
195 TestMultiNode/serial/StopNode 21.25
196 TestMultiNode/serial/StartAfterStop 36.11
197 TestMultiNode/serial/RestartKeepsNodes 190.26
198 TestMultiNode/serial/DeleteNode 24.15
199 TestMultiNode/serial/StopMultiNode 40.29
200 TestMultiNode/serial/RestartMultiNode 95.41
201 TestMultiNode/serial/ValidateNameConflict 45.04
206 TestPreload 151.16
208 TestScheduledStopUnix 118.35
211 TestInsufficientStorage 18.54
214 TestKubernetesUpgrade 211.03
215 TestMissingContainerUpgrade 151
217 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
218 TestNoKubernetes/serial/StartWithK8s 70.05
219 TestNoKubernetes/serial/StartWithStopK8s 18.57
220 TestNoKubernetes/serial/Start 9.58
221 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
222 TestNoKubernetes/serial/ProfileList 1.57
223 TestNoKubernetes/serial/Stop 1.29
224 TestNoKubernetes/serial/StartNoArgs 5.52
225 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
226 TestStoppedBinaryUpgrade/Setup 0.34
227 TestStoppedBinaryUpgrade/Upgrade 114.81
228 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
237 TestPause/serial/Start 64.61
245 TestNetworkPlugins/group/false 0.72
249 TestPause/serial/SecondStartNoReconfiguration 16.61
250 TestPause/serial/Pause 1.2
251 TestPause/serial/VerifyStatus 0.49
252 TestPause/serial/Unpause 0.95
253 TestPause/serial/PauseAgain 5.57
254 TestPause/serial/DeletePaused 3.86
255 TestPause/serial/VerifyDeletedResources 2.3
257 TestStartStop/group/old-k8s-version/serial/FirstStart 334.62
259 TestStartStop/group/no-preload/serial/FirstStart 82.86
261 TestStartStop/group/embed-certs/serial/FirstStart 75.95
263 TestStartStop/group/default-k8s-different-port/serial/FirstStart 61.13
264 TestStartStop/group/embed-certs/serial/DeployApp 7.42
265 TestStartStop/group/no-preload/serial/DeployApp 8.43
266 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.67
267 TestStartStop/group/embed-certs/serial/Stop 20.19
268 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.58
269 TestStartStop/group/no-preload/serial/Stop 20.21
270 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.38
271 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
272 TestStartStop/group/embed-certs/serial/SecondStart 59.1
273 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.6
274 TestStartStop/group/default-k8s-different-port/serial/Stop 20.17
275 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
276 TestStartStop/group/no-preload/serial/SecondStart 90.61
277 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
278 TestStartStop/group/default-k8s-different-port/serial/SecondStart 58.49
279 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
280 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
281 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
282 TestStartStop/group/embed-certs/serial/Pause 3.03
284 TestStartStop/group/newest-cni/serial/FirstStart 59.71
285 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
286 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.06
287 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.35
288 TestStartStop/group/default-k8s-different-port/serial/Pause 3.33
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
291 TestNetworkPlugins/group/auto/Start 58.38
292 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
293 TestStartStop/group/no-preload/serial/Pause 3.6
294 TestNetworkPlugins/group/calico/Start 92.49
295 TestStartStop/group/newest-cni/serial/DeployApp 0
296 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.56
297 TestStartStop/group/newest-cni/serial/Stop 20.17
298 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/newest-cni/serial/SecondStart 53.46
300 TestNetworkPlugins/group/auto/KubeletFlags 0.36
301 TestNetworkPlugins/group/auto/NetCatPod 9.27
302 TestNetworkPlugins/group/auto/DNS 0.14
303 TestNetworkPlugins/group/auto/Localhost 0.17
304 TestNetworkPlugins/group/auto/HairPin 0.12
305 TestNetworkPlugins/group/custom-weave/Start 69.47
306 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
307 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
309 TestStartStop/group/newest-cni/serial/Pause 3.43
310 TestNetworkPlugins/group/calico/ControllerPod 5.02
311 TestNetworkPlugins/group/calico/KubeletFlags 0.41
312 TestNetworkPlugins/group/calico/NetCatPod 10.41
313 TestNetworkPlugins/group/kindnet/Start 89.86
314 TestNetworkPlugins/group/calico/DNS 0.13
315 TestNetworkPlugins/group/calico/Localhost 0.13
316 TestNetworkPlugins/group/calico/HairPin 0.11
317 TestStartStop/group/old-k8s-version/serial/DeployApp 8.97
318 TestNetworkPlugins/group/enable-default-cni/Start 66.46
319 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.81
320 TestStartStop/group/old-k8s-version/serial/Stop 20.7
321 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.34
322 TestNetworkPlugins/group/custom-weave/NetCatPod 10.21
323 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
325 TestNetworkPlugins/group/bridge/Start 58.2
330 TestNetworkPlugins/group/cilium/Start 89.96
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
333 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
334 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
335 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
336 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
337 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
338 TestNetworkPlugins/group/kindnet/NetCatPod 11.18
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
340 TestNetworkPlugins/group/bridge/NetCatPod 9.17
341 TestNetworkPlugins/group/kindnet/DNS 0.14
342 TestNetworkPlugins/group/kindnet/Localhost 0.11
343 TestNetworkPlugins/group/kindnet/HairPin 0.12
344 TestNetworkPlugins/group/bridge/DNS 0.13
345 TestNetworkPlugins/group/bridge/Localhost 0.11
346 TestNetworkPlugins/group/bridge/HairPin 0.11
347 TestNetworkPlugins/group/cilium/ControllerPod 5.02
348 TestNetworkPlugins/group/cilium/KubeletFlags 0.35
349 TestNetworkPlugins/group/cilium/NetCatPod 9.79
350 TestNetworkPlugins/group/cilium/DNS 0.13
351 TestNetworkPlugins/group/cilium/Localhost 0.11
352 TestNetworkPlugins/group/cilium/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (5.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220127024155-6703 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220127024155-6703 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.080950998s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.08s)
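
A note on the json-events subtests: they drive out/minikube-linux-amd64 start -o=json --download-only and consume the command's JSON output line by line. The Go sketch below is only an illustration of that pattern, not the test's actual helper; the profile name download-only-example is a placeholder, the flags are copied from the invocation above, and each output line is decoded into a generic map rather than minikube's real event schema.

	// Hypothetical sketch: run a command that emits one JSON object per line
	// and decode each line generically. Not minikube's own test code.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
			"--download-only", "-p", "download-only-example", "--force",
			"--kubernetes-version=v1.16.0", "--driver=docker",
			"--container-runtime=containerd")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON line
			}
			fmt.Printf("event: %v\n", ev)
		}
		if err := cmd.Wait(); err != nil {
			log.Fatal(err)
		}
	}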

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220127024155-6703
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220127024155-6703: exit status 85 (66.810427ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/27 02:41:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 02:41:55.368555    6715 out.go:297] Setting OutFile to fd 1 ...
	I0127 02:41:55.368664    6715 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:41:55.368675    6715 out.go:310] Setting ErrFile to fd 2...
	I0127 02:41:55.368680    6715 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:41:55.368793    6715 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	W0127 02:41:55.368916    6715 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: no such file or directory
	I0127 02:41:55.369149    6715 out.go:304] Setting JSON to true
	I0127 02:41:55.369965    6715 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1470,"bootTime":1643249846,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:41:55.370034    6715 start.go:122] virtualization: kvm guest
	W0127 02:41:55.372804    6715 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 02:41:55.372810    6715 notify.go:174] Checking for updates...
	I0127 02:41:55.374568    6715 driver.go:344] Setting default libvirt URI to qemu:///system
	I0127 02:41:55.410296    6715 docker.go:132] docker version: linux-20.10.12
	I0127 02:41:55.410382    6715 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 02:41:55.768351    6715 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:33 SystemTime:2022-01-27 02:41:55.435845894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 02:41:55.768446    6715 docker.go:237] overlay module found
	I0127 02:41:55.770441    6715 start.go:281] selected driver: docker
	I0127 02:41:55.770460    6715 start.go:798] validating driver "docker" against <nil>
	I0127 02:41:55.770624    6715 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 02:41:55.856702    6715 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:33 SystemTime:2022-01-27 02:41:55.795956578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 02:41:55.856839    6715 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0127 02:41:55.857266    6715 start_flags.go:369] Using suggested 8000MB memory alloc based on sys=32109MB, container=32109MB
	I0127 02:41:55.857359    6715 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0127 02:41:55.857375    6715 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 02:41:55.857391    6715 cni.go:93] Creating CNI manager for ""
	I0127 02:41:55.857403    6715 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0127 02:41:55.857420    6715 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0127 02:41:55.857429    6715 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0127 02:41:55.857434    6715 start_flags.go:297] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 02:41:55.857446    6715 start_flags.go:302] config:
	{Name:download-only-20220127024155-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220127024155-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0127 02:41:55.859306    6715 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0127 02:41:55.860598    6715 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0127 02:41:55.860688    6715 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0127 02:41:55.892468    6715 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0127 02:41:55.892499    6715 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0127 02:41:55.925148    6715 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0127 02:41:55.925172    6715 cache.go:57] Caching tarball of preloaded images
	I0127 02:41:55.925473    6715 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0127 02:41:55.927593    6715 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0127 02:41:55.991016    6715 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:e4c073636c91a5b52b88b6bdda677b6c -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0127 02:41:58.757379    6715 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0127 02:41:58.757472    6715 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0127 02:41:59.799658    6715 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0127 02:41:59.800015    6715 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/download-only-20220127024155-6703/config.json ...
	I0127 02:41:59.800048    6715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/download-only-20220127024155-6703/config.json: {Name:mke26643674bfa9bce9d7ba04fb68115614bcd34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:41:59.800262    6715 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0127 02:41:59.800467    6715 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220127024155-6703"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
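
A note on the preload download seen in the Last Start log above: the tarball URL carries an md5 digest (checksum=md5:e4c073636c91a5b52b88b6bdda677b6c) that is verified after the download completes. The Go sketch below shows one way to download a file and check an md5 digest; it is illustrative only and is not minikube's downloader. The URL and digest are the ones printed in the log, and the output filename is a placeholder.

	// Hypothetical sketch: download a file and verify its md5 digest.
	// Illustrative only; not minikube's actual download implementation.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
	)

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4"
		want := "e4c073636c91a5b52b88b6bdda677b6c" // md5 from the log above

		resp, err := http.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		out, err := os.Create("preloaded-images.tar.lz4") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		defer out.Close()

		h := md5.New()
		// Write to the file and the hash in a single pass.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			log.Fatal(err)
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("download verified:", got)
	}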

                                                
                                    
TestDownloadOnly/v1.23.2/json-events (5.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220127024155-6703 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220127024155-6703 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.013747022s)
--- PASS: TestDownloadOnly/v1.23.2/json-events (5.01s)

                                                
                                    
TestDownloadOnly/v1.23.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2/preload-exists
--- PASS: TestDownloadOnly/v1.23.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220127024155-6703
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220127024155-6703: exit status 85 (64.367101ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/27 02:42:00
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 02:42:00.517657    6860 out.go:297] Setting OutFile to fd 1 ...
	I0127 02:42:00.517739    6860 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:42:00.517750    6860 out.go:310] Setting ErrFile to fd 2...
	I0127 02:42:00.517754    6860 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:42:00.517852    6860 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	W0127 02:42:00.517962    6860 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/config/config.json: no such file or directory
	I0127 02:42:00.518060    6860 out.go:304] Setting JSON to true
	I0127 02:42:00.518747    6860 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1475,"bootTime":1643249846,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:42:00.518828    6860 start.go:122] virtualization: kvm guest
	I0127 02:42:00.520800    6860 notify.go:174] Checking for updates...
	I0127 02:42:00.522767    6860 config.go:176] Loaded profile config "download-only-20220127024155-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0127 02:42:00.522834    6860 start.go:706] api.Load failed for download-only-20220127024155-6703: filestore "download-only-20220127024155-6703": Docker machine "download-only-20220127024155-6703" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0127 02:42:00.522878    6860 driver.go:344] Setting default libvirt URI to qemu:///system
	W0127 02:42:00.522900    6860 start.go:706] api.Load failed for download-only-20220127024155-6703: filestore "download-only-20220127024155-6703": Docker machine "download-only-20220127024155-6703" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0127 02:42:00.557220    6860 docker.go:132] docker version: linux-20.10.12
	I0127 02:42:00.557331    6860 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 02:42:00.637559    6860 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-01-27 02:42:00.581094209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 02:42:00.637643    6860 docker.go:237] overlay module found
	I0127 02:42:00.639252    6860 start.go:281] selected driver: docker
	I0127 02:42:00.639267    6860 start.go:798] validating driver "docker" against &{Name:download-only-20220127024155-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220127024155-6703 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false}
	I0127 02:42:00.639475    6860 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 02:42:00.719462    6860 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-01-27 02:42:00.664191118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 02:42:00.719989    6860 cni.go:93] Creating CNI manager for ""
	I0127 02:42:00.720006    6860 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0127 02:42:00.720018    6860 start_flags.go:302] config:
	{Name:download-only-20220127024155-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:download-only-20220127024155-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0127 02:42:00.721649    6860 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0127 02:42:00.723002    6860 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
	I0127 02:42:00.723035    6860 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0127 02:42:00.751548    6860 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0127 02:42:00.751575    6860 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0127 02:42:00.784334    6860 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.2/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4
	I0127 02:42:00.784361    6860 cache.go:57] Caching tarball of preloaded images
	I0127 02:42:00.784668    6860 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
	I0127 02:42:00.786584    6860 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4 ...
	I0127 02:42:00.852087    6860 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.2/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:ee714cea50c87826e9b653692158a28b -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4
	I0127 02:42:03.831082    6860 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4 ...
	I0127 02:42:03.831189    6860 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4 ...
	I0127 02:42:04.963702    6860 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on containerd
	I0127 02:42:04.963868    6860 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/download-only-20220127024155-6703/config.json ...
	I0127 02:42:04.964056    6860 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
	I0127 02:42:04.964251    6860 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.2/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/linux/v1.23.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220127024155-6703"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.23.3-rc.0/json-events (4.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.3-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220127024155-6703 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220127024155-6703 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.828635594s)
--- PASS: TestDownloadOnly/v1.23.3-rc.0/json-events (4.83s)

                                                
                                    
TestDownloadOnly/v1.23.3-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.3-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.3-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.3-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220127024155-6703
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220127024155-6703: exit status 85 (65.523011ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/27 02:42:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.17.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220127024155-6703"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.31s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220127024155-6703
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.19s)

                                                
                                    
TestDownloadOnlyKic (9.15s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220127024211-6703 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:230: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220127024211-6703 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (7.80662874s)
helpers_test.go:176: Cleaning up "download-docker-20220127024211-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220127024211-6703
--- PASS: TestDownloadOnlyKic (9.15s)

                                                
                                    
TestBinaryMirror (0.83s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220127024220-6703 --alsologtostderr --binary-mirror http://127.0.0.1:36929 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-20220127024220-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220127024220-6703
--- PASS: TestBinaryMirror (0.83s)
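
A note on TestBinaryMirror: it points minikube start --binary-mirror at a local HTTP endpoint (http://127.0.0.1:36929 in this run) that serves the Kubernetes binaries. The Go sketch below is a minimal stand-in for such a mirror, assuming the binaries have already been staged under a local ./mirror directory; the directory and port here are placeholders, not what the test harness actually uses.

	// Hypothetical sketch: serve a local directory over HTTP so it can act
	// as a --binary-mirror endpoint. Placeholder port and directory.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror (e.g. kubectl/kubeadm/kubelet under versioned paths).
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("binary mirror listening on 127.0.0.1:36929")
		log.Fatal(http.ListenAndServe("127.0.0.1:36929", fs))
	}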

                                                
                                    
TestOffline (88.75s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220127031151-6703 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220127031151-6703 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m25.397780981s)
helpers_test.go:176: Cleaning up "offline-containerd-20220127031151-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220127031151-6703

                                                
                                                
=== CONT  TestOffline
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220127031151-6703: (3.349362961s)
--- PASS: TestOffline (88.75s)

                                                
                                    
TestAddons/Setup (116.46s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220127024221-6703 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220127024221-6703 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m56.461004203s)
--- PASS: TestAddons/Setup (116.46s)

                                                
                                    
TestAddons/parallel/Registry (13.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 13.235211ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-xfxwv" [3ed12ef3-9f22-4b7e-8220-023ec78a9cc5] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008565623s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-rdn27" [52654ebf-1733-49c3-9fbd-570e2e55f777] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006875559s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220127024221-6703 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220127024221-6703 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:296: (dbg) Done: kubectl --context addons-20220127024221-6703 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.586749674s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 ip
2022/01/27 02:44:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:339: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.47s)
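
A note on the registry check above: after the busybox wget probe inside the cluster, the test fetches the node IP and issues a plain GET against the registry on port 5000 (the DEBUG GET http://192.168.49.2:5000 line). The Go sketch below is a hedged version of that kind of reachability probe; the address is the one from this run and will differ per cluster, and the retry loop is an assumption rather than the test's actual logic.

	// Hypothetical sketch: probe a registry endpoint until it answers 200 OK.
	// The address comes from this run's log and will differ per cluster.
	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		const endpoint = "http://192.168.49.2:5000"
		client := &http.Client{Timeout: 5 * time.Second}

		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := client.Get(endpoint)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("registry reachable:", endpoint)
					return
				}
				log.Printf("attempt %d: status %d", attempt, resp.StatusCode)
			} else {
				log.Printf("attempt %d: %v", attempt, err)
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("registry not reachable after 5 attempts")
	}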

                                                
                                    
TestAddons/parallel/Ingress (47.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220127024221-6703 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Done: kubectl --context addons-20220127024221-6703 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (6.891578794s)
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220127024221-6703 replace --force -f testdata/nginx-ingress-v1.yaml

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Done: kubectl --context addons-20220127024221-6703 replace --force -f testdata/nginx-ingress-v1.yaml: (1.44583048s)
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220127024221-6703 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [797b46fc-3515-459b-a7de-4f2b6bebf99a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [797b46fc-3515-459b-a7de-4f2b6bebf99a] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005479096s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context addons-20220127024221-6703 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable ingress --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable ingress --alsologtostderr -v=1: (28.753343459s)
--- PASS: TestAddons/parallel/Ingress (47.82s)
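
A note on the ingress check above: it is effectively curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' executed on the node via minikube ssh. The Go sketch below shows the same idea, sending a request with an overridden Host header; it assumes it runs somewhere that can reach the ingress controller on 127.0.0.1:80 and is not the test's own code.

	// Hypothetical sketch: hit an ingress controller on localhost while
	// presenting a specific virtual host, like curl -H 'Host: ...'.
	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			log.Fatal(err)
		}
		// net/http uses req.Host (not the header map) for the Host header.
		req.Host = "nginx.example.com"

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}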

                                                
                                    
TestAddons/parallel/MetricsServer (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 13.741653ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-6b76bd68b6-xfkgv" [b1bef846-fb38-4947-af6f-31e3669242df] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008490448s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220127024221-6703 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.64s)
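
A note on the metrics-server check above: once a pod matching k8s-app=metrics-server is healthy, the test shells out to kubectl top pods -n kube-system and expects usable output. The Go sketch below mirrors that by invoking kubectl and requiring at least one data row beyond the header; the context name is taken from this run, and the row-count check is an assumption rather than the test's exact validation.

	// Hypothetical sketch: shell out to kubectl (as the test does) and make
	// sure `top pods` returns at least one row beyond the header.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-20220127024221-6703",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl top failed: %v\n%s", err, out)
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) < 2 {
			log.Fatal("no pod metrics returned")
		}
		fmt.Printf("metrics-server returned %d pod rows\n", len(lines)-1)
	}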

                                                
                                    
TestAddons/parallel/HelmTiller (13.17s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 13.645028ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-6d67d5465d-vkt6m" [7d49ba75-6963-417c-ab6a-d8d360873c4d] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008895519s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220127024221-6703 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220127024221-6703 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.635095211s)
addons_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.17s)

TestAddons/parallel/CSI (44.65s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 14.27306ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220127024221-6703 create -f testdata/csi-hostpath-driver/pvc.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220127024221-6703 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220127024221-6703 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [ef3d543e-9f84-4c96-bc56-26438befa26e] Pending
helpers_test.go:343: "task-pv-pod" [ef3d543e-9f84-4c96-bc56-26438befa26e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [ef3d543e-9f84-4c96-bc56-26438befa26e] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004999178s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220127024221-6703 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220127024221-6703 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
=== CONT  TestAddons/parallel/CSI
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220127024221-6703 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220127024221-6703 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220127024221-6703 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220127024221-6703 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220127024221-6703 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220127024221-6703 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220127024221-6703 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [be38a344-4ec7-43fa-8ad0-4eade7791eba] Pending
helpers_test.go:343: "task-pv-pod-restore" [be38a344-4ec7-43fa-8ad0-4eade7791eba] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [be38a344-4ec7-43fa-8ad0-4eade7791eba] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 16.005389251s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220127024221-6703 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220127024221-6703 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220127024221-6703 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.784245939s)
addons_test.go:593: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.65s)

TestAddons/serial/GCPAuth (40.24s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220127024221-6703 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [79393fde-20ec-4e24-944f-c31792ea36e5] Pending
helpers_test.go:343: "busybox" [79393fde-20ec-4e24-944f-c31792ea36e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [79393fde-20ec-4e24-944f-c31792ea36e5] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 7.007067681s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220127024221-6703 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220127024221-6703 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-linux-amd64 -p addons-20220127024221-6703 addons disable gcp-auth --alsologtostderr -v=1: (6.056037623s)
addons_test.go:682: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220127024221-6703 addons enable gcp-auth
addons_test.go:682: (dbg) Done: out/minikube-linux-amd64 -p addons-20220127024221-6703 addons enable gcp-auth: (2.940199292s)
addons_test.go:688: (dbg) Run:  kubectl --context addons-20220127024221-6703 apply -f testdata/private-image.yaml
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7f8587d5b7-bqt79" [3f7c557e-63ff-4d61-9ef2-4bfb5e88145a] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7f8587d5b7-bqt79" [3f7c557e-63ff-4d61-9ef2-4bfb5e88145a] Running
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 13.00599301s
addons_test.go:701: (dbg) Run:  kubectl --context addons-20220127024221-6703 apply -f testdata/private-image-eu.yaml
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-869dcfd8c7-sll2s" [04bca2e9-d921-411e-b25b-90882507ebd5] Pending
helpers_test.go:343: "private-image-eu-869dcfd8c7-sll2s" [04bca2e9-d921-411e-b25b-90882507ebd5] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-869dcfd8c7-sll2s" [04bca2e9-d921-411e-b25b-90882507ebd5] Running
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.007473015s
--- PASS: TestAddons/serial/GCPAuth (40.24s)

TestAddons/StoppedEnableDisable (20.33s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220127024221-6703
addons_test.go:133: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220127024221-6703: (20.146812913s)
addons_test.go:137: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220127024221-6703
addons_test.go:141: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220127024221-6703
--- PASS: TestAddons/StoppedEnableDisable (20.33s)

TestCertOptions (61.69s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220127031655-6703 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220127031655-6703 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (54.491065838s)
cert_options_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220127031655-6703 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:89: (dbg) Run:  kubectl --context cert-options-20220127031655-6703 config view
cert_options_test.go:101: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220127031655-6703 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220127031655-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220127031655-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220127031655-6703: (6.476997464s)
--- PASS: TestCertOptions (61.69s)

TestCertExpiration (271.44s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220127031151-6703 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220127031151-6703 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (1m10.65086788s)
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220127031151-6703 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220127031151-6703 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (16.982754476s)
helpers_test.go:176: Cleaning up "cert-expiration-20220127031151-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220127031151-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220127031151-6703: (3.801858524s)
--- PASS: TestCertExpiration (271.44s)

TestForceSystemdFlag (51.99s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220127031624-6703 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220127031624-6703 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (48.514073061s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220127031624-6703 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-20220127031624-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220127031624-6703
=== CONT  TestForceSystemdFlag
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220127031624-6703: (3.038218714s)
--- PASS: TestForceSystemdFlag (51.99s)

TestForceSystemdEnv (76.19s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220127031151-6703 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220127031151-6703 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m13.153436174s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220127031151-6703 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-20220127031151-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220127031151-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220127031151-6703: (2.683260419s)
--- PASS: TestForceSystemdEnv (76.19s)

TestKVMDriverInstallOrUpdate (4.03s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.03s)

TestErrorSpam/setup (41.26s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220127024617-6703 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220127024617-6703 --driver=docker  --container-runtime=containerd
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220127024617-6703 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220127024617-6703 --driver=docker  --container-runtime=containerd: (41.264648359s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (41.26s)

TestErrorSpam/start (0.84s)
=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (2.34s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 pause
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 pause: (1.341979064s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 pause
--- PASS: TestErrorSpam/pause (2.34s)

TestErrorSpam/unpause (1.52s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

TestErrorSpam/stop (14.9s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 stop
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 stop: (14.65214216s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220127024617-6703 --log_dir /tmp/nospam-20220127024617-6703 stop
--- PASS: TestErrorSpam/stop (14.90s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1707: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/test/nested/copy/6703/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.65s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2089: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220127024724-6703 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2089: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220127024724-6703 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m6.646375324s)
--- PASS: TestFunctional/serial/StartWithProxy (66.65s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.49s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220127024724-6703 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220127024724-6703 --alsologtostderr -v=8: (15.487328398s)
functional_test.go:659: soft start took 15.487950929s for "functional-20220127024724-6703" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.49s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.16s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-20220127024724-6703 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.16s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.73s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 cache add k8s.gcr.io/pause:3.1: (1.303463016s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 cache add k8s.gcr.io/pause:3.3: (1.356286705s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 cache add k8s.gcr.io/pause:latest: (1.069127335s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.73s)

TestFunctional/serial/CacheCmd/cache/add_local (1.93s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220127024724-6703 /tmp/functional-20220127024724-67031266386237
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 cache add minikube-local-cache-test:functional-20220127024724-6703
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 cache add minikube-local-cache-test:functional-20220127024724-6703: (1.666171586s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 cache delete minikube-local-cache-test:functional-20220127024724-6703
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220127024724-6703
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.93s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (349.046538ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 cache reload: (1.07887734s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 kubectl -- --context functional-20220127024724-6703 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-20220127024724-6703 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (54.33s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220127024724-6703 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 02:49:17.578169    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:17.583999    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:17.594208    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:17.614456    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:17.654683    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:17.734979    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:17.895318    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:18.215843    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:18.856634    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:20.137078    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:22.697296    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:27.818458    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 02:49:38.058870    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220127024724-6703 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.327874117s)
functional_test.go:757: restart took 54.327991696s for "functional-20220127024724-6703" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (54.33s)

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:811: (dbg) Run:  kubectl --context functional-20220127024724-6703 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:826: etcd phase: Running
functional_test.go:836: etcd status: Ready
functional_test.go:826: kube-apiserver phase: Running
functional_test.go:836: kube-apiserver status: Ready
functional_test.go:826: kube-controller-manager phase: Running
functional_test.go:836: kube-controller-manager status: Ready
functional_test.go:826: kube-scheduler phase: Running
functional_test.go:836: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (0.87s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 logs
--- PASS: TestFunctional/serial/LogsCmd (0.87s)

TestFunctional/serial/LogsFileCmd (0.9s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1249: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 logs --file /tmp/functional-20220127024724-67032255861942/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.90s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220127024724-6703 config get cpus: exit status 14 (67.764433ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220127024724-6703 config get cpus: exit status 14 (67.876268ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (4.13s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:906: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220127024724-6703 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:911: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220127024724-6703 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 42833: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.13s)

TestFunctional/parallel/DryRun (0.52s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220127024724-6703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:971: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220127024724-6703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (217.04481ms)
-- stdout --
	* [functional-20220127024724-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	
-- /stdout --
** stderr ** 
	I0127 02:50:13.176009   41889 out.go:297] Setting OutFile to fd 1 ...
	I0127 02:50:13.176103   41889 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:13.176115   41889 out.go:310] Setting ErrFile to fd 2...
	I0127 02:50:13.176120   41889 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:13.176252   41889 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0127 02:50:13.176549   41889 out.go:304] Setting JSON to false
	I0127 02:50:13.177996   41889 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1967,"bootTime":1643249846,"procs":567,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:50:13.178087   41889 start.go:122] virtualization: kvm guest
	I0127 02:50:13.180575   41889 out.go:176] * [functional-20220127024724-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0127 02:50:13.182420   41889 out.go:176]   - MINIKUBE_LOCATION=13251
	I0127 02:50:13.184078   41889 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:50:13.185506   41889 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0127 02:50:13.186770   41889 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0127 02:50:13.188191   41889 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:50:13.188608   41889 config.go:176] Loaded profile config "functional-20220127024724-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
	I0127 02:50:13.188961   41889 driver.go:344] Setting default libvirt URI to qemu:///system
	I0127 02:50:13.229109   41889 docker.go:132] docker version: linux-20.10.12
	I0127 02:50:13.229203   41889 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 02:50:13.322924   41889 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:12 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-01-27 02:50:13.259907605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 02:50:13.323020   41889 docker.go:237] overlay module found
	I0127 02:50:13.325230   41889 out.go:176] * Using the docker driver based on existing profile
	I0127 02:50:13.325254   41889 start.go:281] selected driver: docker
	I0127 02:50:13.325259   41889 start.go:798] validating driver "docker" against &{Name:functional-20220127024724-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220127024724-6703 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0127 02:50:13.325353   41889 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0127 02:50:13.325380   41889 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0127 02:50:13.325397   41889 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0127 02:50:13.326994   41889 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0127 02:50:13.328974   41889 out.go:176] 
	W0127 02:50:13.329072   41889 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 02:50:13.330491   41889 out.go:176] 
** /stderr **
functional_test.go:988: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220127024724-6703 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.52s)

TestFunctional/parallel/InternationalLanguage (0.23s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220127024724-6703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220127024724-6703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (229.568367ms)
-- stdout --
	* [functional-20220127024724-6703] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	
-- /stdout --
** stderr ** 
	I0127 02:50:05.292619   38465 out.go:297] Setting OutFile to fd 1 ...
	I0127 02:50:05.292690   38465 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:05.292711   38465 out.go:310] Setting ErrFile to fd 2...
	I0127 02:50:05.292715   38465 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:50:05.292849   38465 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0127 02:50:05.293086   38465 out.go:304] Setting JSON to false
	I0127 02:50:05.294218   38465 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1960,"bootTime":1643249846,"procs":555,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 02:50:05.294298   38465 start.go:122] virtualization: kvm guest
	I0127 02:50:05.296524   38465 out.go:176] * [functional-20220127024724-6703] minikube v1.25.1 sur Ubuntu 20.04 (kvm/amd64)
	I0127 02:50:05.298410   38465 out.go:176]   - MINIKUBE_LOCATION=13251
	I0127 02:50:05.299658   38465 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:50:05.300898   38465 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0127 02:50:05.302345   38465 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0127 02:50:05.303680   38465 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 02:50:05.304220   38465 config.go:176] Loaded profile config "functional-20220127024724-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
	I0127 02:50:05.304830   38465 driver.go:344] Setting default libvirt URI to qemu:///system
	I0127 02:50:05.351179   38465 docker.go:132] docker version: linux-20.10.12
	I0127 02:50:05.351260   38465 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 02:50:05.448070   38465 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:12 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:41 SystemTime:2022-01-27 02:50:05.379691695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 02:50:05.448196   38465 docker.go:237] overlay module found
	I0127 02:50:05.452122   38465 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0127 02:50:05.452151   38465 start.go:281] selected driver: docker
	I0127 02:50:05.452169   38465 start.go:798] validating driver "docker" against &{Name:functional-20220127024724-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220127024724-6703 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0127 02:50:05.452322   38465 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0127 02:50:05.452358   38465 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0127 02:50:05.452381   38465 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0127 02:50:05.455131   38465 out.go:176]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0127 02:50:05.457462   38465 out.go:176] 
	W0127 02:50:05.457575   38465 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 02:50:05.459079   38465 out.go:176] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:861: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:873: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.45s)
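The -f argument above is a Go text/template rendered against the status result. A minimal sketch of how such a format string is evaluated, using a hypothetical Status struct whose field names match the template (not minikube's real type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields referenced by the --format string above;
	// the struct itself is hypothetical.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// Same template string the test passes via -f.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
		if err := tmpl.Execute(os.Stdout, s); err != nil {
			os.Exit(1)
		}
	}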

                                                
                                    
TestFunctional/parallel/ServiceCmd (14.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1431: (dbg) Run:  kubectl --context functional-20220127024724-6703 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1437: (dbg) Run:  kubectl --context functional-20220127024724-6703 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1442: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-849j4" [94a8c697-a203-4e97-9665-5790519d0bb4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-849j4" [94a8c697-a203-4e97-9665-5790519d0bb4] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1442: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.005590073s
functional_test.go:1447: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1469: found endpoint: https://192.168.49.2:31336
functional_test.go:1480: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1495: found endpoint for hello-node: http://192.168.49.2:31336
functional_test.go:1506: Attempting to fetch http://192.168.49.2:31336 ...
functional_test.go:1526: http://192.168.49.2:31336: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-54fbb85-849j4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31336
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmd (14.14s)
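The final fetch step of ServiceCmd amounts to an HTTP GET against the NodePort URL reported by `minikube service hello-node --url`, followed by a read of the echoserver body shown above. A small self-contained sketch of that step, reusing the endpoint address from this run:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint reported by `minikube service hello-node --url` in the log above.
		url := "http://192.168.49.2:31336"
		client := &http.Client{Timeout: 10 * time.Second}

		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("fetch failed:", err)
			return
		}
		defer resp.Body.Close()

		// The echoserver replies with its hostname, request headers, etc.
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d\n%s", resp.StatusCode, body)
	}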

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1541: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 addons list
functional_test.go:1553: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (34.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [a97de765-0c70-495b-9995-6e853fb7dd0d] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008095151s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220127024724-6703 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220127024724-6703 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220127024724-6703 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220127024724-6703 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220127024724-6703 apply -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [1d753711-8c89-4d2a-8ccb-8f0b5fa0c83e] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [1d753711-8c89-4d2a-8ccb-8f0b5fa0c83e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [1d753711-8c89-4d2a-8ccb-8f0b5fa0c83e] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007134613s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220127024724-6703 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220127024724-6703 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220127024724-6703 apply -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [aa9d8690-9898-429a-b4b8-e76fd2d31a38] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [aa9d8690-9898-429a-b4b8-e76fd2d31a38] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [aa9d8690-9898-429a-b4b8-e76fd2d31a38] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.010543587s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220127024724-6703 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.42s)
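The PVC test is an apply-then-wait sequence: create the claim and the pod from testdata, then block until the pod is Ready. A rough stand-alone equivalent driving plain kubectl from Go (file paths and label taken from the test above; the wait timeout is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		steps := [][]string{
			{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"wait", "--for=condition=Ready", "pod", "-l", "test=storage-provisioner", "--timeout=180s"},
		}
		for _, args := range steps {
			// Context name copied from this run.
			full := append([]string{"--context", "functional-20220127024724-6703"}, args...)
			out, err := exec.Command("kubectl", full...).CombinedOutput()
			fmt.Printf("kubectl %v\n%s", args, out)
			if err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}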

                                                
                                    
TestFunctional/parallel/SSHCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1593: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.84s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh -n functional-20220127024724-6703 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 cp functional-20220127024724-6703:/home/docker/cp-test.txt /tmp/mk_test1212299042/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh -n functional-20220127024724-6703 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.80s)

                                                
                                    
TestFunctional/parallel/MySQL (20.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1645: (dbg) Run:  kubectl --context functional-20220127024724-6703 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1651: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-b87c45988-gf45g" [755de8d9-68ee-4d28-a4cb-b0903ee7cc69] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-gf45g" [755de8d9-68ee-4d28-a4cb-b0903ee7cc69] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-gf45g" [755de8d9-68ee-4d28-a4cb-b0903ee7cc69] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1651: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.014458891s
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220127024724-6703 exec mysql-b87c45988-gf45g -- mysql -ppassword -e "show databases;"
functional_test.go:1659: (dbg) Non-zero exit: kubectl --context functional-20220127024724-6703 exec mysql-b87c45988-gf45g -- mysql -ppassword -e "show databases;": exit status 1 (116.22277ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220127024724-6703 exec mysql-b87c45988-gf45g -- mysql -ppassword -e "show databases;"
functional_test.go:1659: (dbg) Non-zero exit: kubectl --context functional-20220127024724-6703 exec mysql-b87c45988-gf45g -- mysql -ppassword -e "show databases;": exit status 1 (114.519308ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220127024724-6703 exec mysql-b87c45988-gf45g -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.31s)
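The repeated non-zero exits above are expected: mysqld needs a few seconds after the pod reports Running before its socket accepts connections, so the test simply re-runs the query until it succeeds. A minimal retry-loop sketch of that pattern (pod name and context copied from this run; retry count and delay are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		args := []string{
			"--context", "functional-20220127024724-6703",
			"exec", "mysql-b87c45988-gf45g", "--",
			"mysql", "-ppassword", "-e", "show databases;",
		}
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				fmt.Printf("succeeded on attempt %d:\n%s", attempt, out)
				return
			}
			// ERROR 2002: socket not ready yet; back off and retry.
			fmt.Printf("attempt %d failed (%v), retrying...\n", attempt, err)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("mysql never became reachable")
	}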

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1781: Checking for existence of /etc/test/nested/copy/6703/hosts within VM
functional_test.go:1783: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo cat /etc/test/nested/copy/6703/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1788: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
TestFunctional/parallel/CertSync (2.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1824: Checking for existence of /etc/ssl/certs/6703.pem within VM
functional_test.go:1825: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo cat /etc/ssl/certs/6703.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1824: Checking for existence of /usr/share/ca-certificates/6703.pem within VM
functional_test.go:1825: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo cat /usr/share/ca-certificates/6703.pem"
functional_test.go:1824: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1825: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1851: Checking for existence of /etc/ssl/certs/67032.pem within VM
functional_test.go:1852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo cat /etc/ssl/certs/67032.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1851: Checking for existence of /usr/share/ca-certificates/67032.pem within VM
functional_test.go:1852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo cat /usr/share/ca-certificates/67032.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1851: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.35s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-20220127024724-6703 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1879: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo systemctl is-active docker"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1879: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo systemctl is-active docker": exit status 1 (405.102025ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:1879: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1879: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo systemctl is-active crio": exit status 1 (414.493239ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)
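`systemctl is-active` prints the unit state and exits non-zero (status 3 here) when the unit is inactive, which is exactly what the test expects for docker and crio on a containerd cluster. A small Go sketch of interpreting that exit code (unit names from the test; the helper name is hypothetical):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runtimeActive reports whether a systemd unit is active. On a non-zero
	// exit, systemctl still prints the state ("inactive", "failed", ...),
	// which Output() returns alongside an *exec.ExitError.
	func runtimeActive(unit string) (string, bool) {
		out, err := exec.Command("systemctl", "is-active", unit).Output()
		state := string(out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return state, false
		}
		return state, err == nil
	}

	func main() {
		for _, unit := range []string{"docker", "crio"} {
			state, active := runtimeActive(unit)
			fmt.Printf("%s: active=%v state=%s", unit, active, state)
		}
	}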

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2111: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2125: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 version -o=json --components

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2125: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 version -o=json --components: (1.325240906s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1971: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1971: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1971: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls --format short

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.2
k8s.gcr.io/kube-proxy:v1.23.2
k8s.gcr.io/kube-controller-manager:v1.23.2
k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220127024724-6703
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-20220127024724-6703
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                         | sha256:7801cf | 15MB   |
| docker.io/library/minikube-local-cache-test | functional-20220127024724-6703 | sha256:4ad459 | 1.74kB |
| docker.io/library/nginx                     | alpine                         | sha256:bef258 | 10.2MB |
| gcr.io/google-containers/addon-resizer      | functional-20220127024724-6703 | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/kube-proxy                       | v1.23.2                        | sha256:d922ca | 39.3MB |
| k8s.gcr.io/pause                            | 3.1                            | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | 3.3                            | sha256:0184c1 | 298kB  |
| k8s.gcr.io/pause                            | 3.6                            | sha256:6270bb | 302kB  |
| docker.io/library/nginx                     | latest                         | sha256:c316d5 | 56.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/echoserver                       | 1.8                            | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.2                        | sha256:478363 | 30.2MB |
| k8s.gcr.io/pause                            | latest                         | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20210326-1e038dc5             | sha256:6de166 | 54MB   |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | sha256:25f8c7 | 98.9MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.2                        | sha256:8a0228 | 32.6MB |
| k8s.gcr.io/kube-scheduler                   | v1.23.2                        | sha256:6114d7 | 15.1MB |
| docker.io/kubernetesui/dashboard            | v2.3.1                         | sha256:e1482a | 66.9MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | sha256:a4ca41 | 13.6MB |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls --format json:
[{"id":"sha256:7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172"],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"15029138"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220127024724-6703"],"size":"10823156"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":["k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263"],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"988886
14"},{"id":"sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":["k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db"],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"301773"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:4ad459df9d9f7dc0ef3a27c16a44f5d9a2ac290cb3a611b13df4fa6c9af55073","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220127024724-6703"],"size":"1737"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c1913617
23f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9","repoDigests":["docker.io/library/nginx@sha256:da9c94bec1da829ebd52431a84502ec471c8e548ffb2cedbf36260fd9bd1d4d3"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10175385"},{"id":"sha256:c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a","repoDigests":["docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767"],"repoTags":["docker.io/library/nginx:latest"],"size":"56733668"},{"id":"sha256:8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:63ede81b7e1fbb51669f4ee461481815f50eeed1f95e48558e3b8c3dace58a0f"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.2"],"size":"32600500"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"31
5399"},{"id":"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","repoDigests":["docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"],"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"],"size":"53960776"},{"id":"sha256:e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":["docker.io/kubernetesui/dashboard@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e"],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"66934416"},{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"},{"id":"sha256:4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:d329c1d6597aa53939e5bd8aa5a0c856357324e5c1eae48d6b70fcbbbdf9
66c7"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.2"],"size":"30163784"},{"id":"sha256:d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f","repoDigests":["k8s.gcr.io/kube-proxy@sha256:ba5545c288ffd91a94a57c665355e7585c650122088bb818d06b74f2ce0c4a98"],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.2"],"size":"39273671"},{"id":"sha256:6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:24f19a1f6aaa54110dde609168a599e15746e0756352e100503a8a4de44af3f1"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.2"],"size":"15131426"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)
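The JSON listing above is an array of objects with id, repoDigests, repoTags, and size (size is a byte count encoded as a string). A short sketch of consuming that output, with a struct whose fields mirror those keys (the struct itself is illustrative, not minikube's type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageInfo matches the keys visible in the `image ls --format json`
	// output above.
	type imageInfo struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // bytes, as a string
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p",
			"functional-20220127024724-6703", "image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []imageInfo
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Printf("%-70v %10s bytes\n", img.RepoTags, img.Size)
		}
	}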

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls --format yaml:
- id: sha256:e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "66934416"
- id: sha256:7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "15029138"
- id: sha256:4ad459df9d9f7dc0ef3a27c16a44f5d9a2ac290cb3a611b13df4fa6c9af55073
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220127024724-6703
size: "1737"
- id: sha256:bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9
repoDigests:
- docker.io/library/nginx@sha256:da9c94bec1da829ebd52431a84502ec471c8e548ffb2cedbf36260fd9bd1d4d3
repoTags:
- docker.io/library/nginx:alpine
size: "10175385"
- id: sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests:
- k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "98888614"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220127024724-6703
size: "10823156"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:63ede81b7e1fbb51669f4ee461481815f50eeed1f95e48558e3b8c3dace58a0f
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.2
size: "32600500"
- id: sha256:d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:ba5545c288ffd91a94a57c665355e7585c650122088bb818d06b74f2ce0c4a98
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.2
size: "39273671"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb
repoDigests:
- docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c
repoTags:
- docker.io/kindest/kindnetd:v20210326-1e038dc5
size: "53960776"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:24f19a1f6aaa54110dde609168a599e15746e0756352e100503a8a4de44af3f1
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.2
size: "15131426"
- id: sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests:
- k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
repoTags:
- k8s.gcr.io/pause:3.6
size: "301773"
- id: sha256:c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5a
repoDigests:
- docker.io/library/nginx@sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767
repoTags:
- docker.io/library/nginx:latest
size: "56733668"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:d329c1d6597aa53939e5bd8aa5a0c856357324e5c1eae48d6b70fcbbbdf966c7
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.2
size: "30163784"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh pgrep buildkitd

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh pgrep buildkitd: exit status 1 (486.243479ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image build -t localhost/my-image:functional-20220127024724-6703 testdata/build
2022/01/27 02:50:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 image build -t localhost/my-image:functional-20220127024724-6703 testdata/build: (2.921826306s)
functional_test.go:319: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220127024724-6703 image build -t localhost/my-image:functional-20220127024724-6703 testdata/build:
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.7s

                                                
                                                
#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.0s

                                                
                                                
#7 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#7 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#7 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#7 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#7 DONE 0.3s

                                                
                                                
#4 [2/3] RUN true
#4 DONE 0.6s

                                                
                                                
#6 [3/3] ADD content.txt /
#6 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:6349e742b1ddda2736da5d27ec17b712d0994443dd86f81b53a13fe065594382 done
#8 exporting config sha256:35ff8832eb5b83f9ad5de76190966990ab3e614c8a8ce84bbb8081d8ec4178f7 0.1s done
#8 naming to localhost/my-image:functional-20220127024724-6703
#8 naming to localhost/my-image:functional-20220127024724-6703 done
#8 DONE 0.4s
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.70s)
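The buildkit steps above (load Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply a three-line Dockerfile. A rough manual reproduction, with the Dockerfile inferred from the buildkit output rather than copied from testdata/build, would be:

# Hypothetical reproduction of the ImageBuild step; the Dockerfile below is
# inferred from the buildkit output above, not copied from testdata/build.
mkdir -p /tmp/imagebuild-demo && cd /tmp/imagebuild-demo
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo demo > content.txt
out/minikube-linux-amd64 -p functional-20220127024724-6703 image build -t localhost/my-image:demo .
out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls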

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.058628426s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220127024724-6703
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1272: (dbg) Run:  out/minikube-linux-amd64 profile lis

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1277: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220127024724-6703

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220127024724-6703: (4.331891996s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1312: (dbg) Run:  out/minikube-linux-amd64 profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1317: Took "424.11127ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1326: (dbg) Run:  out/minikube-linux-amd64 profile list -l

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1331: Took "92.325732ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220127024724-6703 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220127024724-6703 apply -f testdata/testsvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [26d798d7-019b-4f72-8ef8-a84fff445627] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [26d798d7-019b-4f72-8ef8-a84fff445627] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.013458916s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1363: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1368: Took "463.867909ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1381: Took "73.791151ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220127024724-6703
E0127 02:49:58.539247    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220127024724-6703: (3.876147001s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220127024724-6703
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220127024724-6703

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220127024724-6703: (4.835907656s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220127024724-6703 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://10.100.28.23 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220127024724-6703 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
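The TunnelCmd serial tests above (StartTunnel, WaitService, IngressIP, AccessDirect, DeleteTunnel) follow the standard minikube tunnel workflow. A condensed manual equivalent, assuming the nginx-svc LoadBalancer service created from testdata/testsvc.yaml, is:

# Rough manual equivalent of the TunnelCmd serial flow; names are taken from the log above.
out/minikube-linux-amd64 -p functional-20220127024724-6703 tunnel --alsologtostderr &
TUNNEL_PID=$!
kubectl --context functional-20220127024724-6703 apply -f testdata/testsvc.yaml
# once the pod is Running and the LoadBalancer IP is assigned, probe it
IP=$(kubectl --context functional-20220127024724-6703 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${IP}" >/dev/null && echo "tunnel at http://${IP} is working!"
kill "${TUNNEL_PID}"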

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220127024724-6703 /tmp/mounttest1128748597:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1643251805463584726" to /tmp/mounttest1128748597/created-by-test
functional_test_mount_test.go:110: wrote "test-1643251805463584726" to /tmp/mounttest1128748597/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1643251805463584726" to /tmp/mounttest1128748597/test-1643251805463584726
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (412.802025ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh -- ls -la /mount-9p

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 02:50 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 02:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 02:50 test-1643251805463584726
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh cat /mount-9p/test-1643251805463584726
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220127024724-6703 replace --force -f testdata/busybox-mount-test.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [94e9e602-7566-4982-960f-8ddd2a34ed69] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [94e9e602-7566-4982-960f-8ddd2a34ed69] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [94e9e602-7566-4982-960f-8ddd2a34ed69] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 2.043407793s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220127024724-6703 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh stat /mount-9p/created-by-test

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh stat /mount-9p/created-by-pod

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo umount -f /mount-9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220127024724-6703 /tmp/mounttest1128748597:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.28s)
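The any-port variant above boils down to mounting a host directory over 9p and checking it from inside the node. A minimal sketch, with an arbitrary host directory standing in for the generated /tmp/mounttest... path, is:

# Minimal sketch of the any-port mount check; /tmp/mountdemo stands in for the
# generated /tmp/mounttest... directory used by the test.
mkdir -p /tmp/mountdemo && date +%s > /tmp/mountdemo/created-by-test
out/minikube-linux-amd64 mount -p functional-20220127024724-6703 /tmp/mountdemo:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
sleep 2   # the test retries the findmnt call instead of sleeping; a short wait is enough here
out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh -- ls -la /mount-9p
kill "${MOUNT_PID}"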

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image save gcr.io/google-containers/addon-resizer:functional-20220127024724-6703 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 image save gcr.io/google-containers/addon-resizer:functional-20220127024724-6703 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.126554216s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image rm gcr.io/google-containers/addon-resizer:functional-20220127024724-6703

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.31898266s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220127024724-6703
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220127024724-6703

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-20220127024724-6703 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220127024724-6703: (1.057945792s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220127024724-6703
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.13s)
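Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above form a save/load round trip. Condensed, with an arbitrary tarball path in place of the workspace path used by the tests:

# Condensed round trip of the image save/load steps above; /tmp/addon-resizer-save.tar is arbitrary.
IMG=gcr.io/google-containers/addon-resizer:functional-20220127024724-6703
out/minikube-linux-amd64 -p functional-20220127024724-6703 image save "${IMG}" /tmp/addon-resizer-save.tar
out/minikube-linux-amd64 -p functional-20220127024724-6703 image rm "${IMG}"
out/minikube-linux-amd64 -p functional-20220127024724-6703 image load /tmp/addon-resizer-save.tar
out/minikube-linux-amd64 -p functional-20220127024724-6703 image save --daemon "${IMG}"
docker image inspect "${IMG}"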

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220127024724-6703 /tmp/mounttest360950167:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (417.690618ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh -- ls -la /mount-9p

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220127024724-6703 /tmp/mounttest360950167:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo umount -f /mount-9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh "sudo umount -f /mount-9p": exit status 1 (385.106021ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20220127024724-6703 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220127024724-6703 /tmp/mounttest360950167:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.26s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220127024724-6703
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220127024724-6703
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220127024724-6703
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (79.54s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220127025035-6703 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0127 02:50:39.499809    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220127025035-6703 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m19.543054532s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (79.54s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.62s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 addons enable ingress --alsologtostderr -v=5
E0127 02:52:01.420529    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 addons enable ingress --alsologtostderr -v=5: (14.623410848s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.62s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (65.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220127025035-6703 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220127025035-6703 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.312268336s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220127025035-6703 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220127025035-6703 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [dcd6b022-bf44-4525-8a79-9784ef3be7ef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [dcd6b022-bf44-4525-8a79-9784ef3be7ef] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.005114724s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20220127025035-6703 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 addons disable ingress-dns --alsologtostderr -v=1: (12.594662688s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 addons disable ingress --alsologtostderr -v=1: (28.403383565s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (65.53s)
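The ingress validation above reduces to two reachability checks: an in-node curl against the ingress using the Host header from testdata/nginx-ingress-v1beta1.yaml, and an nslookup against the ingress-dns addon on the node IP. Roughly:

# Rough form of the two ingress checks above; hostnames come from the testdata manifests.
out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 ssh \
  "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
nslookup hello-john.test "$(out/minikube-linux-amd64 -p ingress-addon-legacy-20220127025035-6703 ip)"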

                                                
                                    
TestJSONOutput/start/Command (66.88s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220127025317-6703 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0127 02:54:17.577928    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220127025317-6703 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m6.878864228s)
--- PASS: TestJSONOutput/start/Command (66.88s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220127025317-6703 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220127025317-6703 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (15.73s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220127025317-6703 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220127025317-6703 --output=json --user=testUser: (15.734133937s)
--- PASS: TestJSONOutput/stop/Command (15.73s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.27s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220127025447-6703 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220127025447-6703 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.152312ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"93962342-7def-44b3-8674-f2d28a7fc59b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220127025447-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4c19aa8-9afe-416f-a7c5-a76d1603a239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13251"}}
	{"specversion":"1.0","id":"8ec39c83-9120-4b29-b760-1a67b40415b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"206ac04b-3285-4a3c-9f4b-dcdf87810e06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig"}}
	{"specversion":"1.0","id":"2c36f92d-ae34-42d2-95fa-e9612d7b31e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube"}}
	{"specversion":"1.0","id":"67c1e5ca-903d-43ba-8f74-7ee56dab29ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cd6b82df-5caf-431e-a753-c47d1db10a42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220127025447-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220127025447-6703
--- PASS: TestErrorJSONOutput (0.27s)
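The stdout above is a stream of CloudEvents-style JSON lines, one per minikube event. Assuming jq is available, the error event can be pulled out of the stream like this (the profile name is arbitrary):

# Hypothetical consumer of the --output=json event stream shown above; assumes jq is installed.
out/minikube-linux-amd64 start -p json-output-error-demo --memory=2200 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
# expected, based on the log: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64
out/minikube-linux-amd64 delete -p json-output-error-demo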

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.06s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220127025447-6703 --network=
E0127 02:54:53.816030    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:53.821274    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:53.831529    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:53.851788    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:53.892066    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:53.972274    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:54.132693    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:54.453250    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:55.094170    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:56.374925    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:54:58.935608    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:55:04.056568    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:55:14.296763    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220127025447-6703 --network=: (32.786263539s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220127025447-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220127025447-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220127025447-6703: (2.238156914s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.06s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (28.31s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220127025522-6703 --network=bridge
E0127 02:55:34.776988    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220127025522-6703 --network=bridge: (26.214202928s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220127025522-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220127025522-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220127025522-6703: (2.068471747s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.31s)

                                                
                                    
TestKicExistingNetwork (28.82s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220127025550-6703 --network=existing-network
E0127 02:56:15.737540    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220127025550-6703 --network=existing-network: (26.322311122s)
helpers_test.go:176: Cleaning up "existing-network-20220127025550-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220127025550-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220127025550-6703: (2.283313721s)
--- PASS: TestKicExistingNetwork (28.82s)
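The three KIC network tests above differ only in what they pass to --network: an auto-created network, the default bridge, and a pre-existing network. A sketch of the existing-network case, with an arbitrary profile name:

# Sketch of the existing-network case above; the profile name is arbitrary.
docker network create existing-network
out/minikube-linux-amd64 start -p existing-network-demo --network=existing-network \
  --driver=docker --container-runtime=containerd
docker network ls --format '{{.Name}}'   # should include existing-network
out/minikube-linux-amd64 delete -p existing-network-demo
docker network rm existing-network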

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220127025619-6703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220127025619-6703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.014429373s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.01s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220127025619-6703 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.63s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220127025619-6703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220127025619-6703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.631382604s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.63s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220127025619-6703 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.85s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:134: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220127025619-6703 --alsologtostderr -v=5
pause_test.go:134: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220127025619-6703 --alsologtostderr -v=5: (1.854296093s)
--- PASS: TestMountStart/serial/DeleteFirst (1.85s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.33s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220127025619-6703 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220127025619-6703
mount_start_test.go:156: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220127025619-6703: (1.256670644s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (6.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220127025619-6703
mount_start_test.go:167: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220127025619-6703: (5.357991578s)
--- PASS: TestMountStart/serial/RestartStopped (6.36s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220127025619-6703 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (104.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220127025642-6703 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 02:57:09.641136    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:09.646422    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:09.656649    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:09.676864    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:09.717089    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:09.797375    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:09.957792    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:10.278474    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:10.919345    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:12.200298    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:14.761019    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:19.881980    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:30.123162    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:57:37.657982    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 02:57:50.603640    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220127025642-6703 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m44.309406417s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.89s)

TestMultiNode/serial/DeployApp2Nodes (3.4s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:491: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- rollout status deployment/busybox: (1.775934714s)
multinode_test.go:497: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-8bx7b -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-n4drd -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-8bx7b -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-n4drd -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-8bx7b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-n4drd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.40s)

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-8bx7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-8bx7b -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-n4drd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220127025642-6703 -- exec busybox-7978565885-n4drd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

TestMultiNode/serial/AddNode (43.14s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220127025642-6703 -v 3 --alsologtostderr
E0127 02:58:31.564195    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220127025642-6703 -v 3 --alsologtostderr: (42.364729977s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.14s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.77s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status --output json --alsologtostderr
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp testdata/cp-test.txt multinode-20220127025642-6703:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp multinode-20220127025642-6703:/home/docker/cp-test.txt /tmp/mk_cp_test3865698967/cp-test_multinode-20220127025642-6703.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp multinode-20220127025642-6703:/home/docker/cp-test.txt multinode-20220127025642-6703-m02:/home/docker/cp-test_multinode-20220127025642-6703_multinode-20220127025642-6703-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m02 "sudo cat /home/docker/cp-test_multinode-20220127025642-6703_multinode-20220127025642-6703-m02.txt"
E0127 02:59:17.578020    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp multinode-20220127025642-6703:/home/docker/cp-test.txt multinode-20220127025642-6703-m03:/home/docker/cp-test_multinode-20220127025642-6703_multinode-20220127025642-6703-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m03 "sudo cat /home/docker/cp-test_multinode-20220127025642-6703_multinode-20220127025642-6703-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp testdata/cp-test.txt multinode-20220127025642-6703-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp multinode-20220127025642-6703-m02:/home/docker/cp-test.txt /tmp/mk_cp_test3865698967/cp-test_multinode-20220127025642-6703-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp multinode-20220127025642-6703-m02:/home/docker/cp-test.txt multinode-20220127025642-6703:/home/docker/cp-test_multinode-20220127025642-6703-m02_multinode-20220127025642-6703.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703 "sudo cat /home/docker/cp-test_multinode-20220127025642-6703-m02_multinode-20220127025642-6703.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp multinode-20220127025642-6703-m02:/home/docker/cp-test.txt multinode-20220127025642-6703-m03:/home/docker/cp-test_multinode-20220127025642-6703-m02_multinode-20220127025642-6703-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m03 "sudo cat /home/docker/cp-test_multinode-20220127025642-6703-m02_multinode-20220127025642-6703-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp testdata/cp-test.txt multinode-20220127025642-6703-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp multinode-20220127025642-6703-m03:/home/docker/cp-test.txt /tmp/mk_cp_test3865698967/cp-test_multinode-20220127025642-6703-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp multinode-20220127025642-6703-m03:/home/docker/cp-test.txt multinode-20220127025642-6703:/home/docker/cp-test_multinode-20220127025642-6703-m03_multinode-20220127025642-6703.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703 "sudo cat /home/docker/cp-test_multinode-20220127025642-6703-m03_multinode-20220127025642-6703.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 cp multinode-20220127025642-6703-m03:/home/docker/cp-test.txt multinode-20220127025642-6703-m02:/home/docker/cp-test_multinode-20220127025642-6703-m03_multinode-20220127025642-6703-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 ssh -n multinode-20220127025642-6703-m02 "sudo cat /home/docker/cp-test_multinode-20220127025642-6703-m03_multinode-20220127025642-6703-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.77s)

TestMultiNode/serial/StopNode (21.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220127025642-6703 node stop m03: (20.034559701s)
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220127025642-6703 status: exit status 7 (609.259199ms)

-- stdout --
	multinode-20220127025642-6703
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220127025642-6703-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220127025642-6703-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220127025642-6703 status --alsologtostderr: exit status 7 (603.307548ms)

-- stdout --
	multinode-20220127025642-6703
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220127025642-6703-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220127025642-6703-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 02:59:47.109575   83848 out.go:297] Setting OutFile to fd 1 ...
	I0127 02:59:47.109684   83848 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:59:47.109695   83848 out.go:310] Setting ErrFile to fd 2...
	I0127 02:59:47.109701   83848 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 02:59:47.109818   83848 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0127 02:59:47.109991   83848 out.go:304] Setting JSON to false
	I0127 02:59:47.110016   83848 mustload.go:65] Loading cluster: multinode-20220127025642-6703
	I0127 02:59:47.110353   83848 config.go:176] Loaded profile config "multinode-20220127025642-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
	I0127 02:59:47.110370   83848 status.go:253] checking status of multinode-20220127025642-6703 ...
	I0127 02:59:47.110764   83848 cli_runner.go:133] Run: docker container inspect multinode-20220127025642-6703 --format={{.State.Status}}
	I0127 02:59:47.143145   83848 status.go:328] multinode-20220127025642-6703 host status = "Running" (err=<nil>)
	I0127 02:59:47.143180   83848 host.go:66] Checking if "multinode-20220127025642-6703" exists ...
	I0127 02:59:47.143437   83848 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220127025642-6703
	I0127 02:59:47.176039   83848 host.go:66] Checking if "multinode-20220127025642-6703" exists ...
	I0127 02:59:47.176397   83848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:59:47.176460   83848 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220127025642-6703
	I0127 02:59:47.208781   83848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49212 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/multinode-20220127025642-6703/id_rsa Username:docker}
	I0127 02:59:47.299753   83848 ssh_runner.go:195] Run: systemctl --version
	I0127 02:59:47.303278   83848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:59:47.312459   83848 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 02:59:47.396937   83848 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-01-27 02:59:47.340649707 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 02:59:47.398259   83848 kubeconfig.go:92] found "multinode-20220127025642-6703" server: "https://192.168.49.2:8443"
	I0127 02:59:47.398302   83848 api_server.go:165] Checking apiserver status ...
	I0127 02:59:47.398335   83848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:59:47.415661   83848 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	I0127 02:59:47.422575   83848 api_server.go:181] apiserver freezer: "8:freezer:/docker/4ff3f19bc922d223544ac618f3097825c25045a3ccab370d148cf4b1c08ef5d2/kubepods/burstable/poda78670ea6fdd1c092708526b3afdedfd/d7f206aa5bd475003748b72f73ea1d8589e496a3d9f79a11f79da7b74aa2def9"
	I0127 02:59:47.422636   83848 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4ff3f19bc922d223544ac618f3097825c25045a3ccab370d148cf4b1c08ef5d2/kubepods/burstable/poda78670ea6fdd1c092708526b3afdedfd/d7f206aa5bd475003748b72f73ea1d8589e496a3d9f79a11f79da7b74aa2def9/freezer.state
	I0127 02:59:47.428849   83848 api_server.go:203] freezer state: "THAWED"
	I0127 02:59:47.428879   83848 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0127 02:59:47.433626   83848 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0127 02:59:47.433647   83848 status.go:419] multinode-20220127025642-6703 apiserver status = Running (err=<nil>)
	I0127 02:59:47.433656   83848 status.go:255] multinode-20220127025642-6703 status: &{Name:multinode-20220127025642-6703 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:59:47.433669   83848 status.go:253] checking status of multinode-20220127025642-6703-m02 ...
	I0127 02:59:47.433899   83848 cli_runner.go:133] Run: docker container inspect multinode-20220127025642-6703-m02 --format={{.State.Status}}
	I0127 02:59:47.465461   83848 status.go:328] multinode-20220127025642-6703-m02 host status = "Running" (err=<nil>)
	I0127 02:59:47.465482   83848 host.go:66] Checking if "multinode-20220127025642-6703-m02" exists ...
	I0127 02:59:47.465715   83848 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220127025642-6703-m02
	I0127 02:59:47.495824   83848 host.go:66] Checking if "multinode-20220127025642-6703-m02" exists ...
	I0127 02:59:47.496045   83848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:59:47.496081   83848 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220127025642-6703-m02
	I0127 02:59:47.526174   83848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/multinode-20220127025642-6703-m02/id_rsa Username:docker}
	I0127 02:59:47.615340   83848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:59:47.624070   83848 status.go:255] multinode-20220127025642-6703-m02 status: &{Name:multinode-20220127025642-6703-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:59:47.624120   83848 status.go:253] checking status of multinode-20220127025642-6703-m03 ...
	I0127 02:59:47.624414   83848 cli_runner.go:133] Run: docker container inspect multinode-20220127025642-6703-m03 --format={{.State.Status}}
	I0127 02:59:47.656281   83848 status.go:328] multinode-20220127025642-6703-m03 host status = "Stopped" (err=<nil>)
	I0127 02:59:47.656315   83848 status.go:341] host is not running, skipping remaining checks
	I0127 02:59:47.656323   83848 status.go:255] multinode-20220127025642-6703-m03 status: &{Name:multinode-20220127025642-6703-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (21.25s)

TestMultiNode/serial/StartAfterStop (36.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 node start m03 --alsologtostderr
E0127 02:59:53.485317    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 02:59:53.815804    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 03:00:21.498170    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220127025642-6703 node start m03 --alsologtostderr: (35.253754387s)
multinode_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.11s)

TestMultiNode/serial/RestartKeepsNodes (190.26s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220127025642-6703
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220127025642-6703
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220127025642-6703: (59.905903536s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220127025642-6703 --wait=true -v=8 --alsologtostderr
E0127 03:02:09.641455    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 03:02:37.326644    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220127025642-6703 --wait=true -v=8 --alsologtostderr: (2m10.240843421s)
multinode_test.go:305: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220127025642-6703
--- PASS: TestMultiNode/serial/RestartKeepsNodes (190.26s)

TestMultiNode/serial/DeleteNode (24.15s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220127025642-6703 node delete m03: (23.346457424s)
multinode_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status --alsologtostderr
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (24.15s)

TestMultiNode/serial/StopMultiNode (40.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 stop
E0127 03:04:17.578034    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
multinode_test.go:319: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220127025642-6703 stop: (40.057370312s)
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220127025642-6703 status: exit status 7 (119.201903ms)

-- stdout --
	multinode-20220127025642-6703
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220127025642-6703-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220127025642-6703 status --alsologtostderr: exit status 7 (117.071416ms)

-- stdout --
	multinode-20220127025642-6703
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220127025642-6703-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 03:04:38.407362   94698 out.go:297] Setting OutFile to fd 1 ...
	I0127 03:04:38.407453   94698 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:04:38.407462   94698 out.go:310] Setting ErrFile to fd 2...
	I0127 03:04:38.407466   94698 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:04:38.407557   94698 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0127 03:04:38.407698   94698 out.go:304] Setting JSON to false
	I0127 03:04:38.407716   94698 mustload.go:65] Loading cluster: multinode-20220127025642-6703
	I0127 03:04:38.408042   94698 config.go:176] Loaded profile config "multinode-20220127025642-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
	I0127 03:04:38.408059   94698 status.go:253] checking status of multinode-20220127025642-6703 ...
	I0127 03:04:38.408416   94698 cli_runner.go:133] Run: docker container inspect multinode-20220127025642-6703 --format={{.State.Status}}
	I0127 03:04:38.438693   94698 status.go:328] multinode-20220127025642-6703 host status = "Stopped" (err=<nil>)
	I0127 03:04:38.438741   94698 status.go:341] host is not running, skipping remaining checks
	I0127 03:04:38.438749   94698 status.go:255] multinode-20220127025642-6703 status: &{Name:multinode-20220127025642-6703 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 03:04:38.438783   94698 status.go:253] checking status of multinode-20220127025642-6703-m02 ...
	I0127 03:04:38.439028   94698 cli_runner.go:133] Run: docker container inspect multinode-20220127025642-6703-m02 --format={{.State.Status}}
	I0127 03:04:38.468289   94698 status.go:328] multinode-20220127025642-6703-m02 host status = "Stopped" (err=<nil>)
	I0127 03:04:38.468312   94698 status.go:341] host is not running, skipping remaining checks
	I0127 03:04:38.468318   94698 status.go:255] multinode-20220127025642-6703-m02 status: &{Name:multinode-20220127025642-6703-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.29s)

TestMultiNode/serial/RestartMultiNode (95.41s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220127025642-6703 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 03:04:53.818272    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
E0127 03:05:40.622296    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220127025642-6703 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m34.696595388s)
multinode_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220127025642-6703 status --alsologtostderr
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.41s)

TestMultiNode/serial/ValidateNameConflict (45.04s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220127025642-6703
multinode_test.go:457: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220127025642-6703-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220127025642-6703-m02 --driver=docker  --container-runtime=containerd: exit status 14 (72.006801ms)

-- stdout --
	* [multinode-20220127025642-6703-m02] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220127025642-6703-m02' is duplicated with machine name 'multinode-20220127025642-6703-m02' in profile 'multinode-20220127025642-6703'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220127025642-6703-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:465: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220127025642-6703-m03 --driver=docker  --container-runtime=containerd: (41.939136221s)
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220127025642-6703
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220127025642-6703: exit status 80 (344.298148ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220127025642-6703
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220127025642-6703-m03 already exists in multinode-20220127025642-6703-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220127025642-6703-m03
multinode_test.go:477: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220127025642-6703-m03: (2.624174365s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.04s)

TestPreload (151.16s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220127030703-6703 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0127 03:07:09.641553    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220127030703-6703 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m20.832730992s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220127030703-6703 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220127030703-6703 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.076107802s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220127030703-6703 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
E0127 03:09:17.578291    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220127030703-6703 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (1m6.282225638s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220127030703-6703 -- sudo crictl image ls
helpers_test.go:176: Cleaning up "test-preload-20220127030703-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220127030703-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220127030703-6703: (2.606896296s)
--- PASS: TestPreload (151.16s)

TestScheduledStopUnix (118.35s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220127030934-6703 --memory=2048 --driver=docker  --container-runtime=containerd
E0127 03:09:53.815260    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220127030934-6703 --memory=2048 --driver=docker  --container-runtime=containerd: (41.433394436s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220127030934-6703 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220127030934-6703 -n scheduled-stop-20220127030934-6703
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220127030934-6703 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220127030934-6703 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220127030934-6703 -n scheduled-stop-20220127030934-6703
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220127030934-6703
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220127030934-6703 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
E0127 03:11:16.859500    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220127030934-6703
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220127030934-6703: exit status 7 (89.438767ms)

-- stdout --
	scheduled-stop-20220127030934-6703
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220127030934-6703 -n scheduled-stop-20220127030934-6703
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220127030934-6703 -n scheduled-stop-20220127030934-6703: exit status 7 (88.65818ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220127030934-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220127030934-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220127030934-6703: (5.204595416s)
--- PASS: TestScheduledStopUnix (118.35s)

TestInsufficientStorage (18.54s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220127031132-6703 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220127031132-6703 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (11.756093652s)

-- stdout --
	{"specversion":"1.0","id":"e4c11ac6-e5fc-41bd-8fa4-e740d97f7187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220127031132-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"084661b0-3f8c-4a48-8760-b57fc1cc87a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13251"}}
	{"specversion":"1.0","id":"e6ba385b-cf63-4f76-8d10-8ffcd607db43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4c23fbb9-15bd-4268-a0d2-792710687c9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig"}}
	{"specversion":"1.0","id":"5190fe03-750e-4cbc-bcab-b5455e765015","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube"}}
	{"specversion":"1.0","id":"cf96c40b-6e79-4c5a-bf47-e3e8c8d4002b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"26c9ad02-08ec-46ec-aeb3-16b7c6ea94ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b3002daa-8b62-4108-8848-b5e682faae48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"48759d4c-f464-4e76-b839-2aaaba257311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"d0f7100b-75fe-45c6-bf94-fabcae674320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"2a7f7a15-fd16-4757-96cf-718a85e525f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220127031132-6703 in cluster insufficient-storage-20220127031132-6703","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc695ae2-63c4-4ca6-afdc-8a8ec6284f6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1fda62c9-0114-4e7b-adbf-22d51040b6c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"73ff94e3-7a25-469b-8450-35161d416aa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220127031132-6703 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220127031132-6703 --output=json --layout=cluster: exit status 7 (344.582387ms)

-- stdout --
	{"Name":"insufficient-storage-20220127031132-6703","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220127031132-6703","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 03:11:44.982415  116452 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220127031132-6703" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig

                                                
                                                
** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220127031132-6703 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220127031132-6703 --output=json --layout=cluster: exit status 7 (342.167239ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220127031132-6703","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220127031132-6703","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 03:11:45.324756  116551 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220127031132-6703" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	E0127 03:11:45.336485  116551 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/insufficient-storage-20220127031132-6703/events.json: no such file or directory

                                                
                                                
** /stderr **
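Both status checks above emit the cluster-layout JSON shown in the stdout blocks. A hypothetical way to pull just the top-level status name out of that output, assuming jq is available on the host, would be:

	out/minikube-linux-amd64 status -p insufficient-storage-20220127031132-6703 --output=json --layout=cluster | jq -r '.StatusName'
	# prints InsufficientStorage for the output captured above; the minikube command itself still exits with status 7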
helpers_test.go:176: Cleaning up "insufficient-storage-20220127031132-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220127031132-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220127031132-6703: (6.096596336s)
--- PASS: TestInsufficientStorage (18.54s)

                                                
                                    
TestKubernetesUpgrade (211.03s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220127031320-6703 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220127031320-6703 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.181890105s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220127031320-6703

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220127031320-6703: (26.344454387s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220127031320-6703 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220127031320-6703 status --format={{.Host}}: exit status 7 (111.847686ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220127031320-6703 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0127 03:14:53.815831    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220127031320-6703 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m9.181965741s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220127031320-6703 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220127031320-6703 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220127031320-6703 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (87.015104ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220127031320-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.3-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220127031320-6703
	    minikube start -p kubernetes-upgrade-20220127031320-6703 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220127031320-67032 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.3-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220127031320-6703 --kubernetes-version=v1.23.3-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220127031320-6703 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220127031320-6703 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.774994592s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220127031320-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220127031320-6703
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220127031320-6703: (3.286601904s)
--- PASS: TestKubernetesUpgrade (211.03s)

                                                
                                    
TestMissingContainerUpgrade (151s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.2254167960.exe start -p missing-upgrade-20220127031307-6703 --memory=2200 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.2254167960.exe start -p missing-upgrade-20220127031307-6703 --memory=2200 --driver=docker  --container-runtime=containerd: (1m6.359207795s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220127031307-6703
E0127 03:14:17.578446    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220127031307-6703: (12.359668394s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220127031307-6703
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220127031307-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220127031307-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.60002125s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220127031307-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220127031307-6703

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220127031307-6703: (5.128385182s)
--- PASS: TestMissingContainerUpgrade (151.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (101.39806ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220127031151-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
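The MK_USAGE error is the expected outcome here: --kubernetes-version and --no-kubernetes are mutually exclusive. A valid invocation for this profile simply drops the version flag, as the later StartWithStopK8s step does (sketch):

	out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --no-kubernetes --driver=docker --container-runtime=containerd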
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (70.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --driver=docker  --container-runtime=containerd
E0127 03:12:09.641841    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
no_kubernetes_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --driver=docker  --container-runtime=containerd: (1m9.643157392s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220127031151-6703 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (70.05s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --no-kubernetes --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.09482104s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220127031151-6703 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220127031151-6703 status -o json: exit status 2 (472.775622ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220127031151-6703","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220127031151-6703

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:125: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220127031151-6703: (4.000717056s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.57s)

                                                
                                    
TestNoKubernetes/serial/Start (9.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --no-kubernetes --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.584707533s)
--- PASS: TestNoKubernetes/serial/Start (9.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220127031151-6703 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220127031151-6703 "sudo systemctl is-active --quiet service kubelet": exit status 1 (415.66495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
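The non-zero exit above is the expected result: systemctl is-active exits with a non-zero status (typically 3) when the queried unit is not running, which is precisely what this step asserts. Dropping --quiet would also print the unit state, e.g. (illustrative sketch):

	out/minikube-linux-amd64 ssh -p NoKubernetes-20220127031151-6703 "sudo systemctl is-active kubelet"
	# expected to print a non-active state such as inactive and exit non-zero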
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.57s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220127031151-6703
E0127 03:13:32.687027    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
no_kubernetes_test.go:159: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220127031151-6703: (1.292967126s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:192: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220127031151-6703 --driver=docker  --container-runtime=containerd: (5.52247439s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.52s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220127031151-6703 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220127031151-6703 "sudo systemctl is-active --quiet service kubelet": exit status 1 (394.344007ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (114.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.2152177058.exe start -p stopped-upgrade-20220127031342-6703 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.2152177058.exe start -p stopped-upgrade-20220127031342-6703 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (38.600925454s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.2152177058.exe -p stopped-upgrade-20220127031342-6703 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.2152177058.exe -p stopped-upgrade-20220127031342-6703 stop: (1.279036366s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220127031342-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220127031342-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m14.928559184s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (114.81s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220127031342-6703
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                    
TestPause/serial/Start (64.61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:82: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220127031541-6703 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:82: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220127031541-6703 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m4.611179422s)
--- PASS: TestPause/serial/Start (64.61s)

                                                
                                    
TestNetworkPlugins/group/false (0.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:214: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220127031623-6703 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220127031623-6703 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (266.403685ms)

                                                
                                                
-- stdout --
	* [false-20220127031623-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13251
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 03:16:24.023090  160149 out.go:297] Setting OutFile to fd 1 ...
	I0127 03:16:24.023230  160149 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:16:24.023243  160149 out.go:310] Setting ErrFile to fd 2...
	I0127 03:16:24.023249  160149 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0127 03:16:24.023386  160149 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
	I0127 03:16:24.023783  160149 out.go:304] Setting JSON to false
	I0127 03:16:24.025664  160149 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3538,"bootTime":1643249846,"procs":609,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 03:16:24.025760  160149 start.go:122] virtualization: kvm guest
	I0127 03:16:24.028065  160149 out.go:176] * [false-20220127031623-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
	I0127 03:16:24.029931  160149 out.go:176]   - MINIKUBE_LOCATION=13251
	I0127 03:16:24.028243  160149 notify.go:174] Checking for updates...
	I0127 03:16:24.031554  160149 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 03:16:24.032993  160149 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
	I0127 03:16:24.035244  160149 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
	I0127 03:16:24.037049  160149 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 03:16:24.037824  160149 config.go:176] Loaded profile config "kubernetes-upgrade-20220127031320-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3-rc.0
	I0127 03:16:24.037982  160149 config.go:176] Loaded profile config "pause-20220127031541-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
	I0127 03:16:24.038073  160149 config.go:176] Loaded profile config "running-upgrade-20220127031538-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 03:16:24.038118  160149 driver.go:344] Setting default libvirt URI to qemu:///system
	I0127 03:16:24.086486  160149 docker.go:132] docker version: linux-20.10.12
	I0127 03:16:24.086602  160149 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0127 03:16:24.197448  160149 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:68 SystemTime:2022-01-27 03:16:24.120329198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I0127 03:16:24.197542  160149 docker.go:237] overlay module found
	I0127 03:16:24.199572  160149 out.go:176] * Using the docker driver based on user configuration
	I0127 03:16:24.199600  160149 start.go:281] selected driver: docker
	I0127 03:16:24.199606  160149 start.go:798] validating driver "docker" against <nil>
	I0127 03:16:24.199623  160149 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0127 03:16:24.199661  160149 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0127 03:16:24.199680  160149 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0127 03:16:24.201079  160149 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0127 03:16:24.203147  160149 out.go:176] 
	W0127 03:16:24.203256  160149 out.go:241] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0127 03:16:24.204772  160149 out.go:176] 

                                                
                                                
** /stderr **
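This MK_USAGE failure is the intended result of the negative test: with the containerd runtime a CNI is mandatory, so --cni=false is rejected. A start that satisfies the requirement passes an actual CNI instead, e.g. (illustrative sketch; the bridge choice is an assumption, any supported CNI would do):

	out/minikube-linux-amd64 start -p false-20220127031623-6703 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd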
helpers_test.go:176: Cleaning up "false-20220127031623-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220127031623-6703
--- PASS: TestNetworkPlugins/group/false (0.72s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (16.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220127031541-6703 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220127031541-6703 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.592428311s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.61s)

                                                
                                    
TestPause/serial/Pause (1.2s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:112: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220127031541-6703 --alsologtostderr -v=5
pause_test.go:112: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220127031541-6703 --alsologtostderr -v=5: (1.200143742s)
--- PASS: TestPause/serial/Pause (1.20s)

                                                
                                    
TestPause/serial/VerifyStatus (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220127031541-6703 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220127031541-6703 --output=json --layout=cluster: exit status 2 (490.616892ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20220127031541-6703","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220127031541-6703","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)

                                                
                                    
TestPause/serial/Unpause (0.95s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:123: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220127031541-6703 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

                                                
                                    
TestPause/serial/PauseAgain (5.57s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:112: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220127031541-6703 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
pause_test.go:112: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220127031541-6703 --alsologtostderr -v=5: (5.570885472s)
--- PASS: TestPause/serial/PauseAgain (5.57s)

                                                
                                    
TestPause/serial/DeletePaused (3.86s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:134: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220127031541-6703 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/DeletePaused
pause_test.go:134: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220127031541-6703 --alsologtostderr -v=5: (3.856366448s)
--- PASS: TestPause/serial/DeletePaused (3.86s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (2.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:144: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:144: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.18752952s)
pause_test.go:170: (dbg) Run:  docker ps -a
pause_test.go:175: (dbg) Run:  docker volume inspect pause-20220127031541-6703
pause_test.go:175: (dbg) Non-zero exit: docker volume inspect pause-20220127031541-6703: exit status 1 (35.179188ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220127031541-6703

                                                
                                                
** /stderr **
pause_test.go:180: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (334.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220127031714-6703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220127031714-6703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (5m34.618143326s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (334.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (82.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220127031716-6703 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220127031716-6703 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3-rc.0: (1m22.859794877s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (82.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (75.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220127031716-6703 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220127031716-6703 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2: (1m15.94658962s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.95s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (61.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220127031757-6703 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220127031757-6703 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2: (1m1.129636267s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (61.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220127031716-6703 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [93edbfb5-5df8-455f-b8ce-0cce9deb97b7] Pending
helpers_test.go:343: "busybox" [93edbfb5-5df8-455f-b8ce-0cce9deb97b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [93edbfb5-5df8-455f-b8ce-0cce9deb97b7] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.015994591s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220127031716-6703 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220127031716-6703 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [848576cb-30f7-4a7d-889a-6d11f24a4d45] Pending

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [848576cb-30f7-4a7d-889a-6d11f24a4d45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [848576cb-30f7-4a7d-889a-6d11f24a4d45] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.010492192s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220127031716-6703 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220127031716-6703 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220127031716-6703 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.67s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (20.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220127031716-6703 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220127031716-6703 --alsologtostderr -v=3: (20.18491125s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220127031716-6703 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220127031716-6703 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (20.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220127031716-6703 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220127031716-6703 --alsologtostderr -v=3: (20.206209647s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.21s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220127031757-6703 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [bb202125-c242-4a81-896d-0fa59483a042] Pending
helpers_test.go:343: "busybox" [bb202125-c242-4a81-896d-0fa59483a042] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [bb202125-c242-4a81-896d-0fa59483a042] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.011687301s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220127031757-6703 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220127031716-6703 -n embed-certs-20220127031716-6703
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220127031716-6703 -n embed-certs-20220127031716-6703: exit status 7 (91.554889ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220127031716-6703 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (59.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220127031716-6703 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220127031716-6703 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2: (58.668460097s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220127031716-6703 -n embed-certs-20220127031716-6703
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.10s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220127031757-6703 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220127031757-6703 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.60s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (20.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220127031757-6703 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220127031757-6703 --alsologtostderr -v=3: (20.170403263s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220127031716-6703 -n no-preload-20220127031716-6703
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220127031716-6703 -n no-preload-20220127031716-6703: exit status 7 (129.812978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220127031716-6703 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (90.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220127031716-6703 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3-rc.0
E0127 03:19:17.578446    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220127031716-6703 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3-rc.0: (1m30.15117863s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220127031716-6703 -n no-preload-20220127031716-6703
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (90.61s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220127031757-6703 -n default-k8s-different-port-20220127031757-6703
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220127031757-6703 -n default-k8s-different-port-20220127031757-6703: exit status 7 (92.324836ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220127031757-6703 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (58.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220127031757-6703 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2
E0127 03:19:53.815779    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/functional-20220127024724-6703/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220127031757-6703 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2: (58.055674474s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220127031757-6703 -n default-k8s-different-port-20220127031757-6703
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (58.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-lhjzl" [3b2e3b3a-75a9-451c-8685-6e60919ebdbd] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01095329s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-lhjzl" [3b2e3b3a-75a9-451c-8685-6e60919ebdbd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005777467s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220127031716-6703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220127031716-6703 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220127031716-6703 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220127031716-6703 -n embed-certs-20220127031716-6703
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220127031716-6703 -n embed-certs-20220127031716-6703: exit status 2 (398.977344ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220127031716-6703 -n embed-certs-20220127031716-6703
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220127031716-6703 -n embed-certs-20220127031716-6703: exit status 2 (390.874005ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220127031716-6703 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220127031716-6703 -n embed-certs-20220127031716-6703
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220127031716-6703 -n embed-certs-20220127031716-6703
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (59.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220127032017-6703 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220127032017-6703 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3-rc.0: (59.706498953s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.71s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-7fxjz" [0ad4b3e0-ca8f-4c05-8a03-4e2984c38ff7] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011269938s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-7fxjz" [0ad4b3e0-ca8f-4c05-8a03-4e2984c38ff7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005993427s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220127031757-6703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220127031757-6703 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (3.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220127031757-6703 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220127031757-6703 -n default-k8s-different-port-20220127031757-6703
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220127031757-6703 -n default-k8s-different-port-20220127031757-6703: exit status 2 (431.039586ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220127031757-6703 -n default-k8s-different-port-20220127031757-6703
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220127031757-6703 -n default-k8s-different-port-20220127031757-6703: exit status 2 (424.228436ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220127031757-6703 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220127031757-6703 -n default-k8s-different-port-20220127031757-6703

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220127031757-6703 -n default-k8s-different-port-20220127031757-6703
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-t9f47" [c488d42e-e783-49be-8b93-93132f4d4082] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012625102s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-t9f47" [c488d42e-e783-49be-8b93-93132f4d4082] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007211753s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220127031716-6703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (58.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220127031622-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220127031622-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (58.376166885s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220127031716-6703 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220127031716-6703 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220127031716-6703 -n no-preload-20220127031716-6703
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220127031716-6703 -n no-preload-20220127031716-6703: exit status 2 (490.759268ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220127031716-6703 -n no-preload-20220127031716-6703
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220127031716-6703 -n no-preload-20220127031716-6703: exit status 2 (474.95881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220127031716-6703 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220127031716-6703 -n no-preload-20220127031716-6703
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220127031716-6703 -n no-preload-20220127031716-6703
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.60s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (92.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220127031624-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p calico-20220127031624-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: (1m32.486909085s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220127032017-6703 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (20.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220127032017-6703 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220127032017-6703 --alsologtostderr -v=3: (20.168061338s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220127032017-6703 -n newest-cni-20220127032017-6703
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220127032017-6703 -n newest-cni-20220127032017-6703: exit status 7 (99.184532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220127032017-6703 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (53.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220127032017-6703 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220127032017-6703 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.3-rc.0: (53.053693887s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220127032017-6703 -n newest-cni-20220127032017-6703
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (53.46s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220127031622-6703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220127031622-6703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-fkbkx" [8cc67ae5-757c-4e83-858b-d802c43aac2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-fkbkx" [8cc67ae5-757c-4e83-858b-d802c43aac2a] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.008247228s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220127031622-6703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20220127031622-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20220127031622-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (69.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220127031624-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd
E0127 03:22:09.641822    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
E0127 03:22:20.622882    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20220127031624-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: (1m9.474676967s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (69.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220127032017-6703 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220127032017-6703 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220127032017-6703 -n newest-cni-20220127032017-6703
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220127032017-6703 -n newest-cni-20220127032017-6703: exit status 2 (428.490463ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220127032017-6703 -n newest-cni-20220127032017-6703
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220127032017-6703 -n newest-cni-20220127032017-6703: exit status 2 (460.027706ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220127032017-6703 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220127032017-6703 -n newest-cni-20220127032017-6703
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220127032017-6703 -n newest-cni-20220127032017-6703
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.43s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-bbffl" [7c796491-a89a-468f-b825-3e40e1b318c8] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016106744s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20220127031624-6703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context calico-20220127031624-6703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-jgfxr" [1b74a380-1001-428f-9bce-781ab6e348c2] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:343: "netcat-668db85669-jgfxr" [1b74a380-1001-428f-9bce-781ab6e348c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-jgfxr" [1b74a380-1001-428f-9bce-781ab6e348c2] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.159255748s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (89.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220127031623-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220127031623-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m29.862988222s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.86s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:163: (dbg) Run:  kubectl --context calico-20220127031624-6703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:182: (dbg) Run:  kubectl --context calico-20220127031624-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:232: (dbg) Run:  kubectl --context calico-20220127031624-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220127031714-6703 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [00468ce4-6c78-4c17-96e6-dbe286ef98e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [00468ce4-6c78-4c17-96e6-dbe286ef98e9] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.014956317s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220127031714-6703 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.97s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (66.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220127031623-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220127031623-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m6.459754621s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220127031714-6703 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220127031714-6703 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.550785588s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220127031714-6703 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (20.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220127031714-6703 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220127031714-6703 --alsologtostderr -v=3: (20.702472155s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.70s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20220127031624-6703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20220127031624-6703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-wrdmm" [30bae54e-91a8-40f3-834e-2b11db01ee53] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-wrdmm" [30bae54e-91a8-40f3-834e-2b11db01ee53] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 10.00861507s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (10.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220127031714-6703 -n old-k8s-version-20220127031714-6703: exit status 7 (101.472669ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220127031714-6703 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (58.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220127031623-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220127031623-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (58.200150247s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.20s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (89.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220127031624-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
E0127 03:23:41.076329    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220127031716-6703/client.crt: no such file or directory
E0127 03:23:42.356482    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220127031716-6703/client.crt: no such file or directory
E0127 03:23:44.916760    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220127031716-6703/client.crt: no such file or directory
E0127 03:23:50.037156    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220127031716-6703/client.crt: no such file or directory
E0127 03:23:58.677879    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
E0127 03:23:58.683170    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
E0127 03:23:58.693388    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
E0127 03:23:58.713652    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
E0127 03:23:58.753972    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
E0127 03:23:58.834926    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
E0127 03:23:58.995324    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220127031624-6703 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m29.963795891s)
--- PASS: TestNetworkPlugins/group/cilium/Start (89.96s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220127031623-6703 "pgrep -a kubelet"
E0127 03:23:59.315935    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220127031623-6703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-kkln5" [624ac262-14e8-433d-8c45-0c4ad78f7b24] Pending
E0127 03:23:59.956700    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
E0127 03:24:00.277556    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220127031716-6703/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-kkln5" [624ac262-14e8-433d-8c45-0c4ad78f7b24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 03:24:01.237793    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-kkln5" [624ac262-14e8-433d-8c45-0c4ad78f7b24] Running
E0127 03:24:03.797923    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00613939s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220127031623-6703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:182: (dbg) Run:  kubectl --context enable-default-cni-20220127031623-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0127 03:24:08.918188    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:232: (dbg) Run:  kubectl --context enable-default-cni-20220127031623-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-g2mnm" [14b60db8-09ca-4fcb-bee3-1516226b4648] Running

=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.016633114s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220127031623-6703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20220127031623-6703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-b2v6f" [b5fb6df8-459d-499a-b93d-78fcd05c6232] Pending
helpers_test.go:343: "netcat-668db85669-b2v6f" [b5fb6df8-459d-499a-b93d-78fcd05c6232] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 03:24:17.578072    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/addons-20220127024221-6703/client.crt: no such file or directory
E0127 03:24:19.159148    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-b2v6f" [b5fb6df8-459d-499a-b93d-78fcd05c6232] Running
E0127 03:24:20.758404    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/no-preload-20220127031716-6703/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005765232s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220127031623-6703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220127031623-6703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-vbp9j" [529644a0-cf41-469c-aa20-459e154c8818] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:343: "netcat-668db85669-vbp9j" [529644a0-cf41-469c-aa20-459e154c8818] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00692185s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220127031623-6703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kindnet-20220127031623-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kindnet-20220127031623-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220127031623-6703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:182: (dbg) Run:  kubectl --context bridge-20220127031623-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:232: (dbg) Run:  kubectl --context bridge-20220127031623-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-zbbx4" [02a50ef1-0a1c-4a4e-9732-492c9eb9daeb] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.012663796s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220127031624-6703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

TestNetworkPlugins/group/cilium/NetCatPod (9.79s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220127031624-6703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-l9ssg" [8409c71f-a790-4f21-a408-55d557009645] Pending
helpers_test.go:343: "netcat-668db85669-l9ssg" [8409c71f-a790-4f21-a408-55d557009645] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 03:25:20.601010    6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/default-k8s-different-port-20220127031757-6703/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-l9ssg" [8409c71f-a790-4f21-a408-55d557009645] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.007580726s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.79s)

TestNetworkPlugins/group/cilium/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220127031624-6703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.13s)

TestNetworkPlugins/group/cilium/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220127031624-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

TestNetworkPlugins/group/cilium/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220127031624-6703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

Test skip (25/289)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.2/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.2/cached-images (0.00s)

TestDownloadOnly/v1.23.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.2/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.2/binaries (0.00s)

TestDownloadOnly/v1.23.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.2/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.2/kubectl (0.00s)

TestDownloadOnly/v1.23.3-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.3-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/binaries (0.00s)

TestDownloadOnly/v1.23.3-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.3-rc.0/kubectl
aaa_download_only_test.go:158: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:36: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:456: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.36s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220127031756-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220127031756-6703
--- SKIP: TestStartStop/group/disable-driver-mounts (0.36s)

TestNetworkPlugins/group/kubenet (0.57s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:89: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20220127031622-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220127031622-6703
--- SKIP: TestNetworkPlugins/group/kubenet (0.57s)

TestNetworkPlugins/group/flannel (0.5s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220127031623-6703" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220127031623-6703
--- SKIP: TestNetworkPlugins/group/flannel (0.50s)
